Open APIs

  • 1.  TMF639 data model encapsulated in Apache Kafka messages

    TM Forum Member
    Posted Aug 14, 2020 09:25
We're coming from an ITSM/ITIL world where our IT assets are held in a graph-database service model (CMDB).

We want to share these assets with TMF-oriented systems. It seems that TMF639 is the right API / data model for this, and it will also enable other network- and telco-focused inventory mgmt systems to participate.

Our TMF systems are connected to Apache Kafka, which will enable near-real-time updates from the inventory systems - which is desirable.

Has anyone done this type of data flow before, using Kafka as the intermediary?

    thanks in advance... Brad

    ------------------------------
    Brad Taylor
    TELUS
    ------------------------------


  • 2.  RE: TMF639 data model encapsulated in Apache Kafka messages

    TM Forum Member
    Posted Aug 16, 2020 04:38
    Hi Brad

Amdocs is using Kafka extensively for data flows and inter-system communication, though not specifically in the Resource Inventory area.
    Our event model is basically the same as our API payload model, and thus aligned with the relevant TMF Open APIs.

Let's separate the model from the communications.

For the communications, Kafka has many advantages, but you need to be aware of possible errors and recovery paths. For example, what do you do if your message was posted successfully but was not processed by some (or all) of the relevant listeners?
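
    By way of illustration, here is a minimal consumer-side sketch of one common recovery pattern (plain Java with the standard kafka-clients library; the topic and group names are made up): disable auto-commit and commit offsets only after processing succeeds, so a message that fails processing is redelivered rather than silently skipped.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class InventoryListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "inventory-enrichment");  // hypothetical consumer group
            props.put("enable.auto.commit", "false");       // commit only after successful processing
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("resource-inventory"));  // hypothetical topic name
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        process(record.value());  // throw on failure so the offset is never committed
                    }
                    consumer.commitSync();  // mark progress only once the whole batch succeeded
                }
            }
        }

        private static void process(String json) { /* enrich, store, forward, ... */ }
    }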

For the model: the inventory model is generic and catalog-driven, so it probably doesn't make sense to use TMF639 without defining your resource structures in TMF634 (Resource Catalog). I'd advise you to examine both APIs to see whether they fit your assets well.

Without knowing your exact use cases it is a bit difficult to comment explicitly, but I hope these ramblings help.
You might get additional assistance or advice from @Dave Milham (he is pushing for a cross-domain topology model and API) and @Vance Shipley (much experience in catalog and inventory management, especially for resource functions).

    ------------------------------
    Jonathan Goldberg
    Amdocs Management Limited
    Any opinions and statements made by me on this forum are purely personal, and do not necessarily reflect the position of the TM Forum or my employer.
    ------------------------------



  • 3.  RE: TMF639 data model encapsulated in Apache Kafka messages

    TM Forum Member
    Posted Aug 25, 2020 13:38
    Thanks for the reply @Jonathan Goldberg

    Many things to think about.

Our use case is really about making IT asset info available to other systems, particularly event management systems that want to enrich events with asset information when creating incidents (using ITIL language). Using the stream/table duality in Kafka, subscribers can read the current IT asset state as a table... at least that's the theory!
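
    In Kafka Streams terms, that theory would look roughly like this (a sketch only; the topic and store names are invented, and the asset topic is assumed to be log-compacted and keyed by asset id):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Materialized;

    public class AssetTableDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "asset-table-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Read the compacted "it-assets" topic as a KTable: for each key (asset id)
            // the table holds the latest value, i.e. the current state of that asset.
            builder.table("it-assets", Materialized.as("it-assets-store"));

            new KafkaStreams(builder.build(), props).start();
        }
    }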

If we are / were looking at incident creation etc. via Kafka, then we would use the create-incident / incident-created pattern to signal the creation of the incident back to the event mgmt system (and those APIs are defined, I think).

The practical question I have - and maybe you can answer how Amdocs does this - is: what structure/format gets published to the topic? My understanding is that Kafka carries a stream of property:value pairs, whereas a TMF639 entity looks like this (based on the TMF639 Resource Inventory API User Guide v4.0.1, page 9, example from page 16):

    { "id": "45", "href": "http://server:port/resourceInventoryManagement/physicalResource/45", "publicIdentifier": "07467223333", "@type": "Equipment", "@baseType": "PhysicalResource", "@schemaLocation": "//server:port/resourceInventoryManagement/schema/Equipment.yml", "category": "Category 1", "lifecyleState": "Active", "manufactureDate": "2007-04-12", "serialNumber": "123456745644", "versionNumber": "11", "resourceSpecification": { "id": "6", "href": "http://server:port/resourceCatalogManagement/resourceSpecification/6", "@type": "PhysicalResourceSpecification" }, "resourceCharacteristic": [{ "name": "physicalPort", "value": { "@type": "physicalPort", "name": "LAN Port", "isActive": true }, "@schemaLocation": "//host:port/schema/physicalPort.yml" }, { "name": "color", "value": "red" } ], "resourceRelationship": [{ "type": "requires", "resource": { "id": "46", "href": "http://server:port/resourceInventoryManagement/logicalResource/46" }, "resourceRelationshipCharacteristic": [{ "name": "priority", "value": 2 }, { "name": "accuracy", "value": { "@type": "accuracy", "unit": "second", "amount": "5" }, "@schemaLocation": "http:server:port//resourceInventoryManagement/schema/accurancy.yml" } ] } ], "relatedParty": [{ "role": "Manufacturer", "id": "43", "href": "http://serverlocation:port/PartyManagement/individual/43" } ], "resourceAttachment": [{ "id": "http://server:port/documentManagement/document/123" } ], "note": [{ "text": "something about this resource" } ], "place": { "id": "1979", "href": "https://host:port/genericCommon/place/1979", "name": "Main Office", "role": "default delivery" } }​


Does Amdocs (or other implementations, @Dave Milham, @Vance Shipley) attach the JSON as a single value or as a series of values...?

    Thanks a lot for your guidance.

Best regards... Brad

    ------------------------------
    Brad Taylor
    TELUS
    ------------------------------



  • 4.  RE: TMF639 data model encapsulated in Apache Kafka messages

    TM Forum Member
    Posted Aug 26, 2020 01:41
Does Amdocs ...? Well, that would be telling, wouldn't it :)

But seriously, I would imagine that any sane use of Kafka would present an entire document (e.g. a JSON fragment) to a Kafka topic as a single event/message. Kafka "knows" nothing about the payload; that's not its business. To quote from the Kafka documentation here:

    Messages consist of a variable-length header, a variable length opaque key byte array and a variable length opaque value byte array. The format of the header is described in the following section.

The only reason, perhaps, to split up a message would be size and performance considerations. We have found here at Amdocs that messages over about 1MB have caused performance issues. But the price of splitting a message is the need for message consumers to handle that: to "know" that the message is incomplete and to combine the fragments together again. So it adds complexity; only split if you have to.
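
    To make the single-message approach concrete, here is a minimal producer sketch (plain Java with kafka-clients; the topic name and payload are placeholders): the whole TMF639 JSON document travels as one opaque value, keyed by resource id so that a compacted topic would retain only the latest state of each resource.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class InventoryPublisher {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("acks", "all");  // treat a send as successful only once fully replicated
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            String resourceId = "45";
            String tmf639Json = "{ \"id\": \"45\", \"@type\": \"Equipment\", ... }";  // full document elided

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // One resource = one message; Kafka sees the JSON as an opaque byte array.
                RecordMetadata meta = producer
                        .send(new ProducerRecord<>("resource-inventory", resourceId, tmf639Json))
                        .get();  // block for the broker ack; in production use the async callback
                System.out.printf("Published to %s-%d@%d%n", meta.topic(), meta.partition(), meta.offset());
            }
        }
    }

    Note that the producer's max.request.size defaults to 1MB, which lines up with the size threshold mentioned above.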

    Hope it helps


    ------------------------------
    Jonathan Goldberg
    Amdocs Management Limited
    Any opinions and statements made by me on this forum are purely personal, and do not necessarily reflect the position of the TM Forum or my employer.
    ------------------------------



  • 5.  RE: TMF639 data model encapsulated in Apache Kafka messages

    TM Forum Member
    Posted Aug 26, 2020 13:09
    Thanks for the quick reply @Jonathan Goldberg!

    To get started quickly we just used the Kafka Connect JDBC connector. That represents db views/tables as a flat structure... whereas the TMF639 format is quite nested.

I think our next step is to build the db output as a single field that contains the full TMF639-compliant JSON (or to build a Kafka Connect connector that calls the application API and can emit the JSON).
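
    For the first option, the connector config might look something like this (a sketch against the Confluent JDBC source connector; the connection URL, view, and column names are all hypothetical - the view is assumed to render each asset as a single TMF639 JSON column, and an ExtractField transform unwraps it so the message value is the bare JSON string):

    {
      "name": "tmf639-assets-source",
      "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:postgresql://dbhost:5432/cmdb",
        "mode": "timestamp",
        "timestamp.column.name": "last_update",
        "query": "SELECT id, last_update, tmf639_json FROM asset_inventory_view",
        "topic.prefix": "resource-inventory",
        "poll.interval.ms": "5000",
        "transforms": "extract",
        "transforms.extract.type": "org.apache.kafka.connect.transforms.ExtractField$Value",
        "transforms.extract.field": "tmf639_json"
      }
    }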

    Thanks for the dialog - helped me a lot.

    cheers... Brad

    ------------------------------
    Brad Taylor
    TELUS
    ------------------------------



  • 6.  RE: TMF639 data model encapsulated in Apache Kafka messages

    TM Forum Member
    Posted Aug 27, 2020 08:54
Hi Brad,

We have extensive experience using Kafka and TMF APIs in our OSS projects. Beyond that, we have worked a lot with ONAP, which is highly event-based and wraps Kafka in an HTTP interface to manage the event producers and consumers (DMaaP is the name of the component in ONAP). In particular, I work in the Resource Trouble Management and Resource Inventory domains.

We take what is essentially the same output as the TMF API (JSON) and publish that to Kafka. How we do that may vary, but we only use alternate formats (JDBC to Kafka) when that is part of a larger transformation stream using something like Kafka Streams. We often transform the JSON to Avro for efficiency reasons; given a reasonable payload size, JSON is simpler.
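
    As an illustration of the Avro option (a sketch assuming Confluent's Avro serializer and Schema Registry; the schema here is a made-up, trimmed-down resource record rather than a TMF-published one):

    import java.util.Properties;
    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AvroPublisher {
        // Hypothetical, trimmed-down resource schema; a real one would mirror TMF639.
        private static final String SCHEMA_JSON = "{"
                + "\"type\":\"record\",\"name\":\"Resource\",\"fields\":["
                + "{\"name\":\"id\",\"type\":\"string\"},"
                + "{\"name\":\"category\",\"type\":\"string\"},"
                + "{\"name\":\"lifecycleState\",\"type\":\"string\"}]}";

        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
            props.put("schema.registry.url", "http://localhost:8081");  // registry holds the schema

            Schema schema = new Schema.Parser().parse(SCHEMA_JSON);
            GenericRecord resource = new GenericData.Record(schema);
            resource.put("id", "45");
            resource.put("category", "Category 1");
            resource.put("lifecycleState", "Active");

            try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
                // Binary Avro is much smaller on the wire than the equivalent JSON document.
                producer.send(new ProducerRecord<>("resource-inventory-avro", "45", resource));
            }
        }
    }

    The registry holds the full schema and each message carries only a small schema id, which is where the size saving over self-describing JSON comes from.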

Lastly, look at the event standards that accompany the APIs and how they propose to format those payloads; this is an option as well. A new API, TMF688, is in development and could be looked at with your architecture team as a related option; we are looking to leverage it as well.

I hope this gives you a sense of how Kafka and the TMF JSON API output can operate together.


    ------------------------------
Brian Keeley
    CGI Info Systems Management Consulting Inc.
    ------------------------------