Confluent Releases New Features to Make Data Streaming Simple – Database Trends and Applications

Confluent, Inc., the platform to set data in motion, announced the Confluent Q1 22 Launch, including new additions to fully managed data streaming connectors, new controls for cost-effectively scaling massive-throughput Apache Kafka clusters, and a new feature to help maintain trusted data quality across global environments.

These innovations help enable simple, scalable, and reliable data streaming across the business, so any organization can deliver the real-time operations and customer experiences needed to succeed in a digital-first world, according to the company.

"The real-time operations and experiences that set organizations apart in today's economy require pervasive data in motion," said Ganesh Srinivasan, chief product officer, Confluent. "In an effort to help any organization set their data in motion, we've built the easiest way to connect data streams across critical business applications and systems, ensure they can scale quickly to meet immediate business needs, and maintain trust in their data quality on a global scale."

Confluent's newest connectors include Azure Synapse Analytics, Amazon DynamoDB, Databricks Delta Lake, Google BigTable, and Redis, expanding coverage of popular data sources and destinations.

Available only on Confluent Cloud, Confluent's portfolio of more than 50 fully managed connectors helps organizations build powerful streaming applications and improve data portability.

These connectors, designed with Confluent's deep Kafka expertise, give organizations an easy path to modernizing data warehouses, databases, and data lakes with real-time data pipelines.
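For readers unfamiliar with how Kafka connectors are defined, the sketch below shows the general shape of a Kafka Connect sink connector configuration, the JSON body a self-managed Connect worker accepts on its REST API. It is an illustration only: Confluent Cloud's fully managed connectors are configured through its own UI and API, and the connector class, topic names, and host below are hypothetical.

```python
# Sketch of a Kafka Connect sink connector configuration (self-managed
# Connect REST API shape, shown for illustration). The connector class,
# topics, and Redis host here are hypothetical placeholders.

def make_redis_sink_config(name: str, topics: list[str], redis_host: str) -> dict:
    """Build the JSON body that would be POSTed to Connect's /connectors endpoint."""
    return {
        "name": name,
        "config": {
            # Connector class names vary by vendor; this one is illustrative.
            "connector.class": "RedisSinkConnector",
            "topics": ",".join(topics),       # Connect takes a comma-separated list
            "redis.hosts": redis_host,
            "tasks.max": "1",
        },
    }

config = make_redis_sink_config("orders-to-redis", ["orders", "payments"], "redis:6379")
print(config["config"]["topics"])  # orders,payments
```

The managed-connector value proposition is that this configuration, plus the worker running it, is handled by the cloud service rather than operated in-house.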

To simplify real-time visibility into the health of applications and systems, Confluent announced first-class integrations with Datadog and Prometheus.

With a few clicks, operators have deeper, end-to-end visibility into Confluent Cloud within the monitoring tools they already use.
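Prometheus integrations generally work by exposing metrics in the Prometheus text-exposition format, which a scraper then ingests. As a rough sketch of what such output looks like, the parser below handles the basic sample-line shape; the metric names used are hypothetical, not Confluent's actual metric names.

```python
# Minimal sketch: parsing Prometheus text-exposition output such as a
# metrics endpoint might return. Metric names below are hypothetical;
# consult the relevant metrics documentation for real names.

def parse_prometheus_text(payload: str) -> dict[str, float]:
    """Return {metric_with_labels: value} for each sample line, skipping comments."""
    samples = {}
    for line in payload.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comment lines and blanks
        # A basic sample line is "<name>{<labels>} <value>"; split on the last space.
        name_and_labels, _, value = line.rpartition(" ")
        samples[name_and_labels] = float(value)
    return samples

text = """# HELP example_received_bytes hypothetical throughput metric
example_received_bytes{cluster="lkc-123"} 1024.0
example_active_connections{cluster="lkc-123"} 42
"""
print(parse_prometheus_text(text))
```

A real scraper also handles timestamps, escaping, and histogram/summary types; this sketch covers only simple gauge/counter lines.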

This update also introduces new controls to expand and shrink GBps+ cluster capacity, enhancing elasticity for dynamic, real-time business demands.

Paired with Confluent's new Load Metric API, organizations gain a real-time view into cluster utilization so they can make informed decisions about when to expand and when to shrink capacity. With this new level of elastic scalability, businesses can run their highest-throughput workloads with high availability, operational simplicity, and cost efficiency.
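To make the expand/shrink idea concrete, here is a hedged sketch of the kind of decision logic a utilization metric enables. The thresholds and function names are hypothetical illustrations, not Confluent's actual API or recommended values.

```python
# Hypothetical sketch: map a cluster load reading (e.g., from a load
# metric API) to a capacity action. Thresholds are illustrative only.

def scaling_decision(load_pct: float, expand_above: float = 70.0,
                     shrink_below: float = 30.0) -> str:
    """Return 'expand', 'shrink', or 'hold' for a utilization percentage."""
    if load_pct > expand_above:
        return "expand"   # headroom is low; add capacity
    if load_pct < shrink_below:
        return "shrink"   # over-provisioned; cut cost
    return "hold"         # utilization is in the comfortable band

print(scaling_decision(85.0))  # expand
print(scaling_decision(10.0))  # shrink
```

In practice such decisions would also smooth over transient spikes (for example, by averaging readings over a window) before resizing a cluster.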

Global data quality controls are critical for maintaining a highly compatible Kafka deployment fit for long-term, standardized use across the organization.

With the addition of Schema Linking, businesses now have a simple way to maintain trusted data streams across cloud and hybrid environments with shared schemas that sync in real time.

Paired with Cluster Linking, schemas are shared everywhere they're needed, providing an easy means of maintaining high data integrity while deploying use cases such as global data sharing, cluster migrations, and preparation for real-time failover in disaster recovery.
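As a rough illustration of what "keeping schemas in sync" means, the sketch below computes which subjects on a source registry are missing or stale on a destination. Schema Linking does this continuously and in real time; the dict-based registries here are a stand-in for illustration, not the Schema Registry API.

```python
# Illustrative sketch: diff two schema registries represented as
# {subject_name: schema_string} dicts. Subject names and schemas below
# are hypothetical examples.

def subjects_to_sync(source: dict[str, str], dest: dict[str, str]) -> list[str]:
    """Subjects whose schema on the source differs from (or is absent on) the destination."""
    return sorted(s for s, schema in source.items() if dest.get(s) != schema)

src = {"orders-value": '{"type":"record","name":"Order"}', "users-value": "v2"}
dst = {"orders-value": '{"type":"record","name":"Order"}', "users-value": "v1"}
print(subjects_to_sync(src, dst))  # ['users-value']
```

A real registry also versions each subject and enforces compatibility rules, which is why a managed sync feature is simpler than rolling this logic by hand.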

For more information about these updates, visit http://www.confluent.io.
