Open APIs


TMF 688 logic

  • 1.  TMF 688 logic

    TM Forum Member
    Posted 19 days ago

    Hi,

     

    I'm investigating TMF 688 and have finished reading the current version of the API reference documentation. The API and data model are quite clearly described, but almost nothing is said about how it should behave, so I would like to clarify some points with you.

     

    Let's assume for the sake of example that the event bus is a JMS implementation with which I am very familiar.

    As I see it:

    1. The TMF 688 application can be seen as a TMF wrapper around the event bus.
    2. In a cloud-based environment, multiple instances of the TMF 688 application may run in parallel on the same event bus.

     

    This is how I understand what will happen in front of and behind the scenes (a sketch in code follows the list):

    1. An event producer application will create a topic using POST /topic (if it does not exist yet). The topic details will be stored in a database and trigger the creation of a topic/queue at the JMS level. A JMS consumer will be created on the topic to receive the events.
    2. Event consumer applications will register to the topic using POST /topic/{topicId}/hub. The listener details will be stored in the database, attached to the topic.
    3. The event producer application will send events when needed using POST /topic/{topicId}/event. This will use a JMS producer to post the event on the JMS topic/queue. The events may be stored in the database attached to the topic (see the last section).
    4. The JMS consumer will receive the events, read the database to retrieve the list of listeners, and send the events to each listener (if needed, depending on the query part) using a POST /listener/listenToXXX TMF REST call (similar to the one used in the classic TMF 630 notification mechanism).
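
    To make this concrete, here is a rough sketch in Java of how such a wrapper could look, assuming a plain javax.jms ConnectionFactory from whichever provider is used and in-memory maps standing in for the database; all class and method names are illustrative, not part of TMF 688:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import javax.jms.*;

    // Illustrative TMF688-over-JMS bridge (not normative TMF 688 code).
    public class Tmf688JmsBridge {
        private final Session producerSession;
        private final Session consumerSession;
        private final HttpClient http = HttpClient.newHttpClient();
        // topicId -> registered listener callback URLs (the "hub" registrations)
        private final Map<String, List<String>> listeners = new ConcurrentHashMap<>();
        // topicId -> JMS producer for the corresponding destination
        private final Map<String, MessageProducer> producers = new ConcurrentHashMap<>();

        public Tmf688JmsBridge(ConnectionFactory factory) throws JMSException {
            Connection connection = factory.createConnection();
            producerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            consumerSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            connection.start();
        }

        // POST /topic : create the topic record and the underlying JMS destination
        public void createTopic(String topicId) throws JMSException {
            // A JMS queue rather than a topic, so that multiple TMF688 instances act as
            // competing consumers and each event is dispatched to the listeners only once.
            Queue sendQueue = producerSession.createQueue("tmf688." + topicId);
            producers.put(topicId, producerSession.createProducer(sendQueue));
            listeners.putIfAbsent(topicId, new CopyOnWriteArrayList<>());
            Queue receiveQueue = consumerSession.createQueue("tmf688." + topicId);
            MessageConsumer consumer = consumerSession.createConsumer(receiveQueue);
            consumer.setMessageListener(message -> deliver(topicId, message));
        }

        // POST /topic/{topicId}/hub : register a listener callback
        public void registerListener(String topicId, String callbackUrl) {
            listeners.get(topicId).add(callbackUrl);
        }

        // POST /topic/{topicId}/event : publish the event onto the JMS destination
        public void publishEvent(String topicId, String eventJson) throws JMSException {
            producers.get(topicId).send(producerSession.createTextMessage(eventJson));
        }

        // Fan-out: the JMS consumer pushes each received event to every registered listener
        private void deliver(String topicId, Message message) {
            try {
                String eventJson = ((TextMessage) message).getText();
                for (String url : listeners.getOrDefault(topicId, List.of())) {
                    HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                            .header("Content-Type", "application/json")
                            .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                            .build();
                    http.send(request, HttpResponse.BodyHandlers.ofString());
                }
            } catch (Exception e) {
                // real code would retry or dead-letter here
                e.printStackTrace();
            }
        }
    }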

     

    I'm considering using JMS queues instead of topics because multiple instances of TMF 688 can run. If JMS topics were used, each event would be delivered to the corresponding consumer in each instance, and each consumer would send the event to all registered listeners, ending with the event being sent multiple times to the same listener (once per TMF 688 instance).

     

    According to the documentation, the presence of GET operations on /topic/{topicId}/event suggests that the events can/should be stored in the database as well, and thus that TMF 688 could support pull in addition to the default push way of accessing events (event bus implementations may use internal storage as well, but events are usually stored only until they have been delivered to all consumers).
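
    If pull access is indeed intended, the publish step above could additionally persist each event so that GET /topic/{topicId}/event can serve it later, roughly like this (again only a sketch, with an in-memory store standing in for the database):

    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Illustrative event store backing GET /topic/{topicId}/event (pull access).
    public class Tmf688EventStore {
        private final Map<String, List<String>> eventsByTopic = new ConcurrentHashMap<>();

        // called from the publish path, in addition to the JMS send
        public void store(String topicId, String eventJson) {
            eventsByTopic.computeIfAbsent(topicId, id -> new CopyOnWriteArrayList<>())
                         .add(eventJson);
        }

        // GET /topic/{topicId}/event : return the retained events for consumers that pull
        public List<String> list(String topicId) {
            return List.copyOf(eventsByTopic.getOrDefault(topicId, List.of()));
        }
    }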

     

    Am I correct in my understanding?

     

    Best regards,



    ------------------------------
    Frederic Thise
    Proximus SA
    ------------------------------


  • 2.  RE: TMF 688 logic

    TM Forum Member
    Posted 4 days ago
    Hi All,

    We are also looking to use 688 in our implementation and are a bit confused.

    The user guide seems to suggest that such an implementation can be satisfied by technology such as Kafka or RabbitMQ.

    The diagram seems to suggest that you can use the schema registry of the chosen product and then expose the topics to the producers and consumers that live within your application ecosystem.



    What I find strange is that you would also have to have a database alongside something like Kafka to facilitate event streaming. This would mean the event is being stored twice.

    For this reason we are considering using the TMF688 data model but not implementing the API endpoints.
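
    To make that option concrete, here is a rough sketch (Java, assuming the standard kafka-clients library; the topic name and event fields are purely illustrative) of publishing a TMF688 Event-shaped payload straight to Kafka with no REST layer in between:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class DirectKafkaPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            // Payload shaped like a TMF688 Event (fields abbreviated for the example).
            String event = "{ \"eventId\": \"42\", \"eventType\": \"ProductOrderStateChangeEvent\","
                    + " \"eventTime\": \"2021-06-01T10:00:00Z\","
                    + " \"event\": { \"productOrder\": { \"id\": \"PO-1\" } } }";

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The Kafka topic plays the role of the TMF688 Topic resource;
                // the event is stored once, in the Kafka log, not in a separate database.
                producer.send(new ProducerRecord<>("productOrderEvents", "42", event));
            }
        }
    }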

    It would be fantastic to get some thoughts on this.

    ------------------------------
    David Whitfield
    TalkTalk Group
    ------------------------------



  • 3.  RE: TMF 688 logic

    TM Forum Member
    Posted 4 days ago
    Hi All,

    We have similar questions here at Telstra.
    We are looking to adopt TMF688 to broadcast event notifications from multiple domains (in response to TMF6xx API requests) to multiple consumers.
    The TMF688 user guide release 4.0 implies that the HTTP/REST operations are a wrapper around the message broker technology. In this situation, the event management platform still needs to translate from REST/HTTP to the messaging protocol. The message producer and subscriber (consumer) may not be able to fully leverage the benefits of the queues/topics, as they are not binding themselves directly to the messaging technology (whatever it may be: Kafka, RabbitMQ, JMS).
    Is TMF planning to update the TMF688 user guide with a standard for API operations in a messaging-protocol (non-HTTP/REST) format?

    Thanks,
    Anu

    ------------------------------
    Anu Aulakh
    Telstra Corporation
    ------------------------------



  • 4.  RE: TMF 688 logic

    TM Forum Member
    Posted 6 hours ago
    Hi,

    The TMF688 Event API is a resource-based API like all the other APIs. It is a generic wrapper for an event (streaming) platform supporting the pub/sub pattern.

    The interface is currently defined as a REST interface (/event: POST, GET, PATCH, DELETE; /hub: POST, DELETE; callback POST for notifications), because of the simplicity of the API description via Swagger/OAS.

    If your event platform (HUB) implementation supports the pub/sub pattern, then you can use interface protocols other than REST, e.g. MQTT.
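
    For example, a pub/sub-capable hub could be fed over MQTT instead of REST, roughly like this (a sketch in Java using the Eclipse Paho client as one possible choice; broker URL and topic name are illustrative):

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class MqttEventPublisher {
        public static void main(String[] args) throws MqttException {
            MqttClient client = new MqttClient("tcp://broker.example.com:1883", "tmf688-producer");
            client.connect();

            // Same Event payload as on the REST interface, just carried over MQTT.
            byte[] payload = "{ \"eventId\": \"42\", \"eventType\": \"ProductOrderStateChangeEvent\" }".getBytes();
            MqttMessage message = new MqttMessage(payload);
            message.setQos(1); // at-least-once delivery

            client.publish("tmf688/productOrderEvents", message);
            client.disconnect();
        }
    }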

    For the future, it is planned to define additional interface protocols for the Event API through a meta language, e.g. AsyncAPI 2.0.

    But keep in mind that the important part of the Event API is not the definition of an interface but the usage of the different resources (Topic/Event/Hub) to support the pub/sub pattern for event notifications (cf. the event sourcing, storing and notification patterns described by Martin Fowler).

    KR Thomas


    ------------------------------
    Thomas Braun
    Deutsche Telekom AG
    ------------------------------



  • 5.  RE: TMF 688 logic

    TM Forum Member
    Posted 4 days ago
    Hi community,

    I agree with David that querying events from an event database is counterproductive when you already have the event in the Kafka streaming platform.
    We are currently developing an implementation based on Kafka, but we only intend to implement the following endpoints:

    • POST /topic: Create a new topic
    • GET /topic: List the existing topics
    • DELETE /topic/{id}: Delete a topic
    • POST /topic/{id}/event: Publish an event on a topic
    • POST /topic/{id}/hub: Register a webhook (with filter)
    • GET /topic/{id}/hub: List the webhook registrations
    • GET /topic/{id}/hub/{id}: Get info regarding a webhook including performance statistics
    • DELETE /topic/{id}/hub/{id}: Remove registration
    Registering a webhook with a filter is far more efficient than querying a database.
    By filtering on the eventTime field it is possible to set a starting point in the past. This way events can be replayed to a webhook going back as far in time as the retention policy allows. The webhook will receive the events one at a time, entirely removing the timeout issues that are typical for querying large data sets.
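
    The replay-from-a-point-in-time part maps quite directly onto the Kafka consumer API; a rough sketch (assuming the standard kafka-clients library and using the record timestamp as a stand-in for eventTime; topic name, group and webhook handling are illustrative):

    import java.time.Duration;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
    import org.apache.kafka.common.TopicPartition;

    public class EventReplayer {
        // Replays all events of a topic whose log timestamp is >= startTimeMillis
        // and hands each one to the webhook delivery routine (not shown here).
        public static void replayFrom(String topic, long startTimeMillis) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "tmf688-replay");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                List<TopicPartition> partitions = consumer.partitionsFor(topic).stream()
                        .map(p -> new TopicPartition(topic, p.partition()))
                        .toList();
                consumer.assign(partitions);

                // Translate the requested start time into an offset per partition.
                Map<TopicPartition, Long> query = new HashMap<>();
                partitions.forEach(tp -> query.put(tp, startTimeMillis));
                Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
                offsets.forEach((tp, ot) -> {
                    if (ot != null) consumer.seek(tp, ot.offset());
                });

                // From here on, events arrive one at a time and can be POSTed to the webhook.
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(5))) {
                    System.out.println("would POST to webhook: " + record.value());
                }
            }
        }
    }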

    We intend to offer our Kafka based implementation commercially later this year.
    Feel free to contact me if you are interested in beta-testing.


    ------------------------------
    Koen Peeters
    Ciminko Luxembourg
    ------------------------------



  • 6.  RE: TMF 688 logic

    TM Forum Member
    Posted 6 hours ago
    Hi David,
    as I mentioned in another thread (10. RE: TMF 688 Event API and Event resource in other APIs), all Event API resources (Event, Topic, Hub) should be implemented by that streaming platform (e.g. Kafka).
    This means the event is stored only once (in the Event resource, or the Topic/Event resources).

    The schema registry is only there for schemaLocation reference reasons, but it is a necessary component if not all schemas (inside the event) are known and cannot otherwise be referenced when the client wants to see, e.g., the event payload.


    ------------------------------
    Thomas Braun
    Deutsche Telekom AG
    ------------------------------