Confluent Parallel Consumer Adapter for SAP Integration Suite

Apache Kafka is a popular open-source project for processing event-driven data streams based on a transaction log.

The KaTe Confluent Parallel Consumer Adapter extends the proven KaTe Kafka Adapter for high-throughput-optimized scenarios on SAP Integration Suite and Edge Integration Cell (EIC).

The adapter inherits all functions of the KaTe Kafka Adapter and complements them with parallel processing capabilities for high throughput.

Basic functions

The adapter allows you to publish Iflow messages as Kafka records and to subscribe to Kafka records as Iflow messages.

Kafka key, headers, and data sections are translated into Iflow message parts and vice versa. Topics, partitions, and technical reliability settings (ack mode/consumer commits) are customizable per use case.

Message processing is observable and debuggable at channel and message log level, fully integrated into SAP Cloud Integration, without third-party tooling.
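As a rough sketch of the record-to-message translation described above (the function and field names are illustrative, not the adapter's actual API), a Kafka record's key, headers, and value could map to Iflow message parts like this:

```python
# Illustrative sketch only: the real adapter performs this mapping internally.
def record_to_iflow_message(record: dict) -> dict:
    """Map a Kafka record (key, headers, value) to an Iflow-style message."""
    return {
        "headers": {
            # Kafka metadata becomes message headers
            "kafka.key": record["key"],
            "kafka.topic": record["topic"],
            "kafka.partition": record["partition"],
            "kafka.offset": record["offset"],
            # record headers are carried over as-is
            **record.get("headers", {}),
        },
        # the record value becomes the message body
        "body": record["value"],
    }

msg = record_to_iflow_message({
    "topic": "orders", "partition": 0, "offset": 42,
    "key": "order-1001", "headers": {"source": "webshop"},
    "value": '{"orderId": "1001", "amount": 99.5}',
})
```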

Work securely with company & compliance standards

Our product supports all commonly used security standards (PLAINTEXT, SASL_PLAINTEXT, SASL_SSL, SASL/PLAIN, SASL/SCRAM, OAuth2) as well as client certificate authentication for broker and schema registry connections.

Support for high performance, resilience & flow control

The adapter allows parallelization at key level instead of only partition level, which enables 10-500x higher throughput on single records, and even more on batched records, without losing the functional order of related records.
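The idea behind key-level parallelization can be simulated in a few lines (a hypothetical sketch, not the adapter's implementation): records with the same key are routed to the same worker, so order is preserved per key while different keys are processed in parallel.

```python
import collections
import queue
import threading

def process_in_key_order(records, num_workers=4):
    """Simulate key-level parallelization: same key -> same worker queue,
    so per-key order is preserved while distinct keys run in parallel."""
    queues = [queue.Queue() for _ in range(num_workers)]
    processed = collections.defaultdict(list)
    lock = threading.Lock()

    def worker(q):
        while True:
            item = q.get()
            if item is None:  # shutdown signal
                break
            key, value = item
            with lock:
                processed[key].append(value)

    threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
    for t in threads:
        t.start()
    for key, value in records:
        # hashing the key pins it to one worker, preserving per-key order
        queues[hash(key) % num_workers].put((key, value))
    for q in queues:
        q.put(None)
    for t in threads:
        t.join()
    return processed

result = process_in_key_order([("a", 1), ("b", 1), ("a", 2), ("b", 2), ("a", 3)])
# per-key order survives: result["a"] == [1, 2, 3], result["b"] == [1, 2]
```

Because the worker pool size is independent of the partition count, throughput scales with the number of distinct keys rather than the number of partitions.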

Stay in charge of what you process from a topic and how much load you put on your connected backends.

The product supports different methods to allow, filter and control message throughput on topics through:

  • Filtering: Kafka records can be filtered based on record headers, keys, and data content before triggering an integration.
  • Batching: a set of n Kafka records can be batched into a single Iflow message to optimize throughput.
  • Parallelization: adjustable via different modes (no order, order at key level) as well as the number of threads that process records in parallel.
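Filtering and batching together can be sketched like this (illustrative code only; in the real adapter both are configured via channel settings):

```python
def filter_and_batch(records, predicate, batch_size):
    """Drop records that fail the predicate, then group the survivors into
    batches of at most batch_size; each batch would become one Iflow message."""
    kept = [r for r in records if predicate(r)]
    return [kept[i:i + batch_size] for i in range(0, len(kept), batch_size)]

records = [
    {"key": "k1", "headers": {"type": "order"}, "value": "v1"},
    {"key": "k2", "headers": {"type": "ping"}, "value": "v2"},
    {"key": "k3", "headers": {"type": "order"}, "value": "v3"},
    {"key": "k4", "headers": {"type": "order"}, "value": "v4"},
]
batches = filter_and_batch(records, lambda r: r["headers"]["type"] == "order", 2)
# the "ping" record is dropped; the three orders form two batches
```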

Full control of operations in one tool

  • Process records from the start or end of the stream, or from any arbitrary position in the stream.
  • Process and re-process records.
  • Deduplicate, if necessary, by offset or functional headers.
  • Group names are freely definable and not fixed by us as the vendor.
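Offset-based deduplication, useful after reprocessing from an earlier stream position, can be sketched as follows (a simplified illustration of the concept):

```python
def deduplicate(records, seen=None):
    """Skip records whose (topic, partition, offset) identity was already
    processed, e.g. after a reprocessing run over an earlier stream position."""
    seen = set() if seen is None else seen
    unique = []
    for r in records:
        ident = (r["topic"], r["partition"], r["offset"])
        if ident not in seen:
            seen.add(ident)
            unique.append(r)
    return unique

batch = [
    {"topic": "orders", "partition": 0, "offset": 7, "value": "a"},
    {"topic": "orders", "partition": 0, "offset": 7, "value": "a"},  # duplicate
    {"topic": "orders", "partition": 0, "offset": 8, "value": "b"},
]
unique = deduplicate(batch)
# offsets 7 and 8 are each kept once
```

Deduplication by a functional header would work the same way, with the identity tuple built from a business key in the record headers instead of the offset.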

Automated or manual error resolution

One common "challenge" with ordered streaming systems like Kafka is that, when errors occur and order matters, processing can come to a total halt. Our product lets you handle such errors manually as well as automatically, e.g. by forwarding the record to an error destination or by documenting and ignoring certain cases.

This applies to serialization errors, data transformation errors, and any downstream processing error (processing through an Iflow).
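The resolution choices can be sketched as a simple dispatch (hypothetical names; the adapter configures this per error category):

```python
def consume(records, process, on_error="forward"):
    """Process records in order; on failure either forward the failing
    record to an error destination (dead-letter style) so the stream can
    continue, document & ignore it, or halt for manual resolution."""
    results, dead_letter = [], []
    for record in records:
        try:
            results.append(process(record))
        except Exception as exc:
            if on_error == "forward":
                dead_letter.append({"record": record, "error": str(exc)})
            elif on_error == "ignore":
                print(f"documented and ignored: {record!r} ({exc})")
            else:
                raise  # manual resolution: stop and let an operator decide
    return results, dead_letter

results, dlq = consume(["1", "x", "3"], int)
# "x" fails int() and is forwarded; the records after it still get processed
```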

Avro, Protocol Buffers, Json & schema registries

Our product supports typical binary data formats like Avro and Protobuf with Kafka-typical optimizations such as a schema registry.

The adapter can translate Avro and Protocol Buffers to XML and vice versa. The necessary translations of Avro/Protobuf schemas are generated with an included schema generation tool.

Transformation from binary (Avro/Protocol Buffers) and JSON to XML and vice versa is supported.
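To give a flavor of the record-to-XML conversion, here is a simplified stdlib sketch for the JSON case (the adapter's real conversion is schema-driven via the included generation tool, and decoding Avro/Protobuf requires the respective schema):

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(payload: str, root_tag: str = "record") -> str:
    """Convert a flat JSON object (e.g. a decoded record value)
    into a simple XML representation."""
    obj = json.loads(payload)
    root = ET.Element(root_tag)
    for field, value in obj.items():
        child = ET.SubElement(root, field)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml = json_to_xml('{"orderId": "1001", "amount": 99.5}')
# → '<record><orderId>1001</orderId><amount>99.5</amount></record>'
```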

SAP roadmap compatible

Our Kafka Adapter is available for SAP Process Orchestration & Integration Suite or EIC (Edge integration cell).

This allows you to simply import any developed interface, and it will work without migration effort, adjustments, or refactoring, which is often required with other adapters or vendors (including SAP).

Choose your own timing and roadmap for a possible migration. Your investment in integrations on SAP PO is easily transferable to Integration Suite without unknown risks or commercial questions.

Difference to standard adapter for Cloud integration

  • 10-500x higher throughput than the standard adapter, as parallelization and batching can be applied on functional levels instead of being glued to partitions.
  • The standard adapter can't use arbitrary stream positions or group names for processing/reprocessing, which is a very important feature for actually operating with Kafka.
  • Our product supports all data formats (Avro, Protocol Buffers, schema registries, automated schema generation).
  • Our adapter allows batching into one Iflow execution for high throughput, unlike the standard.
  • The KaTe Parallel Consumer Adapter allows manual as well as automated error resolution, which is not the case in the standard version.
  • Filtering & deduplication let you select exactly what you need on high-throughput topics, unlike the standard.
  • With the standard adapter there are no simple mechanisms to quickly understand consumer lag or search records in your message processing log; this must be done with third-party tools (which may be hard to access for an SAP integration team) or manual developments.

Summary

Feature | SAP Standard Kafka Adapter | Confluent Parallel Consumer Adapter
Parallelism model | Limited by the number of Kafka partitions | Configurable worker pool enabling high parallelism within a single consumer
Ordering guarantees | Ordering guaranteed only per partition | Ordering guaranteed per message key (optional), with parallel execution across keys
Throughput scaling | Requires increasing the partition count to scale | Scales without increasing partitions, enabling 10-500x higher throughput
Iflow execution | Blocking calls can stall message processing partly or fully | Non-blocking, asynchronous processing
Automated error handling | Not supported; some errors are not even visible (deserialization) | Errors in deserialization, data transformation, or Iflow execution can be handled automatically per error category by logging and ignoring them or forwarding them to error/dead-letter topics
Offset/group name control | Not supported | Any start offset possible, reprocessing of arbitrary offsets possible, group names manually configurable
Message acknowledgment | Offset commits at partition level | Fine-grained, message-level acknowledgment and processing control
Advanced monitoring | Not supported | Rich monitoring experience (consumer lag, 1:1 detection of which Kafka record produced which Iflow message) out of the box
Filtering | Not supported, only manual in the Iflow | Filtering per header or body to decrease unnecessary load
Avro/Protobuf and JSON with schema registry | Not supported | Supported, incl. data transformation and schema generation to/from XML

Further Information:

SAP Appcenter listing:

KaTe Kafka Adapter for SAP PO in the SAP Appcenter

Our Blog on SAP Community

Hook your SAP landscape to Azure Event Hubs with KaTe Kafka Adapter & SAP PO

Pay what you use

All of our adapters are subscription-licensed, which simplifies use at the project level or in limited use cases.

30 days trial

All our adapters are available for a 30-day trial period.

Verified SAP Partner

All our adapters are developed in close cooperation with SAP and our verified SAP experts.

FAQ

Feel free to ask away!

The Confluent Parallel Consumer Adapter for SAP Integration Suite is a powerful extension of the KaTe Kafka Adapter for connecting SAP systems with Apache Kafka under extremely high throughput requirements. Kafka and SAP can be connected both on-premises and in the SAP Business Technology Platform (BTP). The adapter enables seamless processing of event-driven data streams and bi-directional message exchange between SAP Cloud Integration Iflows and Kafka topics at unprecedented scale. As the successor of the Kafka Adapter for SAP Process Orchestration, it provides 1:1 compatibility when migrating existing SAP-Kafka integrations from PO.

The adapter allows you to publish SAP Cloud Integration Iflow messages as Kafka records and subscribe to Kafka records as Iflow messages. Kafka keys, headers, and data are translated into Iflow messages and vice versa.

Unlike typical Kafka consumers, the ordered processing capabilities of the adapter are not limited to partitions: ordering can be scaled up to record key level, or disabled entirely, which enables extremely high throughput.


In addition, it fully supports popular data formats like Avro, Protocol Buffers, and JSON, including Schema Registry integration and automatic schema generation for seamless conversion to and from XML. Topics, partitions, and reliability settings (e.g., acknowledgement mode, consumer commits) are highly configurable. Additionally, message processing is fully integrated into SAP Cloud Integration, making it fully traceable and debuggable without third-party tools.

The Confluent Parallel Consumer Adapter for SAP Integration Suite complies with modern Kafka security standards, including PLAINTEXT, SASL_PLAINTEXT, SASL_SSL, SASL/PLAIN, SASL/SCRAM, OAuth2, and client certificate authentication. This ensures secure data transmission aligned with corporate security policies.

The adapter is designed for high-performance streaming, offering batch processing, message filtering, and massive parallelization at record key level. Messages can be filtered based on headers, keys, or data content and grouped into optimized batches to improve throughput.

Parallel processing options at key level, or even with no ordering, enable extremely high throughput (10-500x) compared to standard Kafka consumers.

For reliable message processing, the adapter provides both automated and manual error handling. Common issues like serialization errors, data transformation errors, or Iflow processing errors can be forwarded to an error target or deduplicated, ensuring uninterrupted data flows.

Unlike the SAP standard Kafka adapter, the KaTe Kafka Adapter offers:

  • Extremely high throughput (10-500x compared to the SAP standard Kafka adapter) through parallelization options
  • Processing from any stream position with custom group names
  • Full support for Avro, Protocol Buffers, JSON, and schema registries with automatic schema generation
  • Batch processing within a single Iflow execution
  • Advanced manual and automated error handling
  • Filtering and deduplication for optimized data quality and throughput

SAP Technology and Migration Consulting –
Future-Proof Integration with KaTe GmbH


KaTe GmbH is your trusted partner for
SAP technology consulting and SAP migration projects.

We specialize in ensuring a smooth and secure transformation of existing SAP PO (Process Orchestration) landscapes to the modern SAP Integration Suite, including the SAP Edge Integration Cell for hybrid integration scenarios. With tailored concepts, proven best practices, and deep expertise, we make sure your SAP integration remains sustainable, high-performing, and future-ready.

Our Core Expertise in SAP Technology Consulting
  • SAP PO to SAP Integration Suite Migration – including analysis, architecture design, and technical implementation

  • SAP Edge Integration Cell Implementation and Optimization – enabling flexible and secure hybrid integrations

  • SAP BTP (Business Technology Platform) Consulting – leveraging modern services and tools for innovation and process automation

  • Technology-Driven Strategy Consulting – from cloud integration to API management

Contact us today for a personalized consultation and discover how we can modernize your SAP integration and system landscape.

Contact us - now!

Get in touch with us by calling or emailing us - we'll be happy to help!

Contact