Open APIs

  • 1.  TMF 639 and multiple non-related resources bulk creation

    Posted Dec 06, 2023 02:51

    Dear Sirs,

    I need your advice/guide on how to correctly implement TMF 639 when multiple resources that are not directly related need to be created in a batch.

    As an example, I might have thousands of different logical resources that are not necessarily linked by resource relationships, but I would need to create them quickly as part of a synchronization process (hence the need for a bulk endpoint).

    TMF 639 gives no indication of how to handle anything like that. What would the recommendation be?

    Cheers



    ------------------------------
    Valerio Santinelli

    ------------------------------


  • 2.  RE: TMF 639 and multiple non-related resources bulk creation

    TM Forum Member
    Posted Dec 11, 2023 04:29

    The naïve REST paradigm does not support creation of multiple entities using a POST operation.

    The design guidelines for v5, not yet published, will suggest the ability to create multiple entities using PATCH at the collection level, with the JSON Patch add operator.

    However, using this pattern you would need to deal with issues like partial success.
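
    To make that concrete, here is a minimal sketch of what such a collection-level PATCH might look like. Since the v5 guidelines are not published yet, the base path, the use of "/-" to append to the collection, and the response shape are all assumptions rather than the official contract:

        # Hedged sketch only: not the official TMF639 v5 contract.
        import requests

        BASE = "https://serverRoot/tmf-api/resourceInventoryManagement/v5"  # assumed base path

        resources = [
            {"@type": "LogicalResource", "name": f"port-{i}"} for i in range(150)
        ]

        # One JSON Patch "add" operation per resource; "/-" appends to the collection.
        patch_body = [{"op": "add", "path": "/-", "value": r} for r in resources]

        resp = requests.patch(
            f"{BASE}/resource",
            json=patch_body,
            headers={"Content-Type": "application/json-patch+json"},
        )

        # Partial success is the open question: the server might return 200 with
        # per-item results, or fail the whole batch; the guidelines will have to say.
        print(resp.status_code)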

    An alternative would be to use an ImportJob tailored to your particular use case.

    Hope it helps.



    ------------------------------
    Jonathan Goldberg
    Amdocs Management Limited
    Any opinions and statements made by me on this forum are purely personal, and do not necessarily reflect the position of the TM Forum or my employer.
    ------------------------------



  • 3.  RE: TMF 639 and multiple non-related resources bulk creation

    Posted Dec 11, 2023 06:43

    Thanks Jonathan, I am not aware of ImportJobs in the TMF ecosystem. How are they defined? Do you have any pointers to them?

    I have also been thinking about defining a new type of Resource, let's say ResourceList, that could carry an array of Resources as a ResourceCharacteristic. It would then be up to the service to decompose the array into the individual resources and store them in the local database.

     



    ------------------------------
    Valerio Santinelli
    ------------------------------



  • 4.  RE: TMF 639 and multiple non-related resources bulk creation

    TM Forum Member
    Posted Dec 11, 2023 12:36

    ImportJob (and ExportJob) is used in the catalog APIs (TMF620, TMF633, TMF634), but it is a completely generic structure and could be used for other entity imports as well.
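
    As a rough illustration, an inventory import could be kicked off along these lines. TMF639 itself does not define ImportJob, so the endpoint and fields below are borrowed from the TMF620-style structure and should be treated as assumptions, not a published contract:

        # Hedged sketch: ImportJob shape borrowed from the catalog APIs, applied to inventory.
        import requests

        BASE = "https://serverRoot/tmf-api/resourceInventoryManagement/v4"  # assumed base path

        import_job = {
            "@type": "ImportJob",
            "contentType": "application/json",
            # The bulk payload sits behind a URL; the server pulls and processes it
            # asynchronously and reports progress through the job status.
            "url": "https://central.example.com/exports/edge-42-resources.json",
        }

        job = requests.post(f"{BASE}/importJob", json=import_job).json()

        # Poll (or subscribe to notifications) until the job completes; per-item
        # failures would surface through the job's errorLog rather than HTTP errors.
        status = requests.get(f"{BASE}/importJob/{job['id']}").json()["status"]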

    I'm not enamored of your suggested solution, since an array of resources is not a resource. You could achieve the same effect by defining a dedicated Task resource that would hold an array of resources as input, together with a correlated array of generated IDs (for successes) and errors (for failures), as sketched below. But you need to consider whether working with large payloads is the best way to go.
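
    The task-resource idea might look something like this; "BulkResourceCreate" and its result fields are hypothetical names used for illustration, not a published TMF resource:

        # Hedged sketch of a dedicated Task resource for bulk creation (hypothetical).
        import requests

        BASE = "https://serverRoot/tmf-api/resourceInventoryManagement/v4"  # assumed base path

        task = {
            "@type": "BulkResourceCreate",              # hypothetical task resource
            "resource": [                               # input: the resources to create
                {"@type": "LogicalResource", "name": "port-1"},
                {"@type": "LogicalResource", "name": "port-2"},
            ],
        }

        result = requests.post(f"{BASE}/bulkResourceCreate", json=task).json()

        # Assumed result shape: IDs correlated to the successful inputs, plus
        # per-item errors for the ones that failed.
        created_ids = result.get("resultingResourceId", [])
        errors = result.get("resourceError", [])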



    ------------------------------
    Jonathan Goldberg
    Amdocs Management Limited
    Any opinions and statements made by me on this forum are purely personal, and do not necessarily reflect the position of the TM Forum or my employer.
    ------------------------------



  • 5.  RE: TMF 639 and multiple non-related resources bulk creation

    Posted Dec 12, 2023 11:12

    Let me try to give you some context to better understand my use case. I am also not so fond of my suggested approach.

    The topology I am working with is made of a central location cluster that lives in a data center and a high number of edge locations that are clusters distributed in different regions.

    Both the central location and the edge locations run an inventory service. The central location contains the data of all the edge locations and each edge location only contains the data related to its own inventory.

    Those inventories must be kept in sync. To simplify, we can say that the central location manages all the data, and whenever a new resource is created or changed it gets synchronized to the corresponding edge inventory.

    One of the issues we have is that we can't just transfer those resources one by one as the data changes often and it takes way too much time to make one HTTP call per resource. We need to transfer them in bulk, ideally about 150 resources per HTTP call.

    In our previous implementation we created our own PUT endpoint for bulk creation/replacement of resources that would take an array of resources as input.

    In the new implementation we would like to achieve this using a 100% TMF-compliant approach, and this is where I am failing to find the correct way to do it. Maybe we should just stick to providing some additional non-TMF endpoints, or maybe there is a better approach than what we have discussed so far. Any help in finding the right direction is really appreciated.



    ------------------------------
    Valerio Santinelli
    ------------------------------



  • 6.  RE: TMF 639 and multiple non-related resources bulk creation

    TM Forum Member
    Posted Dec 13, 2023 02:54

    Valerio,

    You have edge inventories and a central inventory which is the set of all edge inventories. It's strange that you say you need to push items from the central inventory to the edge; one would expect that inventory changes at the edge would be pushed to the central mirror, with the edge being the authoritative version (source of truth), but you do you.

    The topic of Catalog synchronization was addressed fairly thoroughly in the BOS Catalyst (DTW2019), which contributed to IG1222 ODA Technical Architecture Part 4 – ODA Patterns. Inventory synchronization would follow the same principles. We described how Gartner's Master Data Management (MDM) patterns apply and provided detailed API examples for a few identified patterns, including:

    • Yellow Pages
    • Aggregated
    • Proxy
    • Override

    You say you have an efficient solution for synchronization, but you want something that would be TMF compliant. IMHO you can use any mechanism you wish to synchronize the data between sites; that doesn't necessarily change the compliance of any single Open API endpoint.

    Let's say a TMF639 interface at central receives a POST and creates a new Resource. You could arrange to have that Resource appear in the correct edge inventory and be immediately available for GET on TMF639 at that edge. If you are supporting notifications, and there is a subscriber at the edge, you create an event to notify the creation when the synchronization happens. It doesn't matter that the entity was created by some other protocol: logically there was a ResourceCreateEvent, and the notification reports the new Resource. Both TMF639 implementations are perfectly compliant in this scenario.
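
    As a sketch of that flow at the edge (the event envelope follows the usual TMF notification pattern, but the callback handling here is illustrative rather than taken from the TMF639 document):

        # Hedged sketch: emit a ResourceCreateEvent at the edge once the replicated
        # resource has been persisted, regardless of the protocol that carried it there.
        import uuid
        import datetime
        import requests

        def notify_resource_created(resource: dict, callback_url: str) -> None:
            event = {
                "eventId": str(uuid.uuid4()),
                "eventTime": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "eventType": "ResourceCreateEvent",
                "event": {"resource": resource},
            }
            # Deliver to the listener registered via POST /hub at this edge.
            requests.post(callback_url, json=event)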



    ------------------------------
    Vance Shipley
    SigScale
    ------------------------------



  • 7.  RE: TMF 639 and multiple non-related resources bulk creation

    Posted Dec 13, 2023 04:22

    Hi Vance,

    thanks for the clarification. I would like to have a look at the ODA Patterns you mentioned, but that download is for members only and, although I am working as an external contractor for a company which is a member, I do not have access to that document myself.

    Regarding my use case, I tried to keep it simple for reasoning purposes; the actual use case is a bit more complicated. The logical resources are mostly created at the central location and replicated at the edge, because the planning happens at the central location and that is the source of truth. Only one specific type of logical resource gets created at the edges and is synchronized back to the central location.

    Physical resources, on the other hand, all get created and updated at the edges and synchronized to the central location.

    We also have some other support classes, part of the logical resources, that get created and updated at the edges and synchronized to the central location.

    As you can see, the scenario is a bit more complicated because synchronization happens in both directions depending on the class of resource.

    On top of that, we also have one more layer of synchronization to an external system that we do not own, which both provides data and receives updates.

    That's why the synchronization process is so important: it has to be solid and fast to keep all this data in sync in near real time.

    The fact that you are saying we can introduce whatever process we want to keep the data in sync, as long as we do not change the behavior of the TMF-defined APIs, is reassuring.



    ------------------------------
    Valerio Santinelli
    ------------------------------