
This post is authored by Hayden Blauzvern and originally appeared on Sigstore’s blog. Sigstore is a new standard for signing, verifying, and protecting software. It is a project of the Linux Foundation. 

Developers, package maintainers, and enterprises that would like to adopt Sigstore may already sign published artifacts. Signers may have existing procedures to securely store and use signing keys. Sigstore can be used to sign artifacts with existing self-managed, long-lived signing keys. Sigstore provides a simple user experience for signing, verification, and generating structured signature metadata for artifacts and container signatures. Sigstore also offers a community-operated, free-to-use transparency log for auditing signature generation.

Sigstore additionally has the ability to use code signing certificates with short-lived signing keys bound to OpenID Connect identities. This signing approach offers simplicity due to the lack of key management; however, this may be too drastic of a change for enterprises that have existing infrastructure for signing. This blog post outlines strategies to ease adoption of Sigstore while still using existing signing approaches.

Signing with self-managed, long-lived keys

Developers that maintain their own signing keys but want to migrate to Sigstore can first switch to using Cosign to generate a signature over an artifact. Cosign supports importing an existing RSA, ECDSA, or ED25519 PEM-encoded PKCS#1 or PKCS#8 key with cosign import-key-pair --key key.pem, and can sign and verify with cosign sign-blob --key cosign.key artifact-path and cosign verify-blob --key cosign.pub --signature artifact-sig artifact-path.


Benefits:

  • Developers can get accustomed to Sigstore tooling to sign and verify artifacts.
  • Sigstore tooling can be integrated into CI/CD pipelines.
  • For signing containers, signature metadata is published with the OCI image in an OCI registry.

Signing with self-managed keys with auditability

While maintaining their own signing keys, developers can increase auditability of signing events by publishing signatures to the Sigstore transparency log, Rekor. This allows developers to audit when signatures are generated for artifacts they maintain, and also to monitor when their signing key is used to create a signature.

Developers can upload a signature to the transparency log during signing with COSIGN_EXPERIMENTAL=1 cosign sign-blob --key cosign.key artifact-path. If developers would like to use their own signing infrastructure while still publishing to a transparency log, they can use the Rekor CLI or API. To upload an artifact and cryptographically verify its inclusion in the log using the Rekor CLI:

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <artifact_public_key> \
  --artifact <url_to_artifact|local_path>

rekor-cli verify --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <artifact_public_key> \
  --artifact <url_to_artifact|local_path>

In addition to PEM-encoded certificates and public keys, Sigstore supports uploading many different key formats, including PGP, Minisign, SSH, PKCS#7, and TUF. When uploading using the Rekor CLI, specify the --pki-format flag. For example, to upload an artifact signed with a PGP key:

gpg --armor -u <user-id> --output signature.asc --detach-sig package.tar.gz

gpg --export --armor <user-id> > public.key

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature signature.asc \
  --public-key public.key \
  --artifact package.tar.gz \
  --pki-format pgp


Benefits:

  • Developers begin to publish signing events for auditability.
  • Artifact consumers can create a verification policy that requires a signature be published to a transparency log.

Self-managed keys in identity-based code signing certificates with auditability

When requesting a code signing certificate from the Sigstore certificate authority Fulcio, Fulcio binds an OpenID Connect identity to a key, allowing for a verification policy based on identity rather than a key. Developers can request a code signing certificate from Fulcio with a self-managed long-lived key, sign an artifact with Cosign, and upload the artifact signature to the transparency log.

However, artifact consumers can still fail-open with verification (allow the artifact, while logging the failure) if they do not want to take a hard dependency on Sigstore (require that Sigstore services be used for signature generation). A developer can use their self-managed key to generate a signature. A verifier can simply extract the verification key from the certificate without verification of the certificate’s signature. (Note that verification can occur offline, since inclusion in a transparency log can be verified using a persisted signed bundle from Rekor and code signing certificates can be verified with the CA root certificate. See Cosign’s verification code for an example of verifying the Rekor bundle.)

Once a consumer takes a hard dependency on Sigstore, a CI/CD pipeline can move to fail-closed (forbid the artifact if verification fails).
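The fail-open versus fail-closed distinction can be sketched as a small policy function (hypothetical names; in practice a verifier such as Cosign performs the cryptographic signature and log-inclusion checks):

```python
from dataclasses import dataclass

@dataclass
class VerificationResult:
    signature_valid: bool       # did the artifact signature verify?
    in_transparency_log: bool   # was an inclusion proof found in Rekor?

def admit_artifact(result: VerificationResult, fail_closed: bool) -> bool:
    """Decide whether a CI/CD pipeline admits an artifact.

    In fail-open mode, a missing transparency-log entry is logged but the
    artifact is still admitted as long as the signature verifies.
    In fail-closed mode, both checks must pass.
    """
    if not result.signature_valid:
        return False  # a bad signature is always rejected
    if not result.in_transparency_log:
        if fail_closed:
            return False  # hard dependency on Sigstore: reject
        print("WARN: no transparency log entry; admitting (fail-open)")
    return True

# Fail-open: admitted despite the missing log entry.
print(admit_artifact(VerificationResult(True, False), fail_closed=False))  # True
# Fail-closed: the same artifact is now rejected.
print(admit_artifact(VerificationResult(True, False), fail_closed=True))   # False
```

Moving from fail-open to fail-closed is then a one-flag policy change rather than a rewrite of the verification step.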


Benefits:

  • A stronger verification policy that enforces both the presence of the signature in a transparency log and the identity of the signer.
  • Verification policies can be enforced fail-closed.

Identity-based (“keyless”) signing

This final step is added for completeness. Signing is done using code signing certificates, and signatures must be published to a transparency log for verification. With identity-based signing, fail-closed is the only option, since Sigstore services must be online to retrieve code signing certificates and append entries to the transparency log. Developers will no longer need to maintain signing keys.


The Sigstore tooling and infrastructure can be used as a whole or modularly. Each separate integration can help to improve the security of artifact distribution while allowing for incremental updates and verifying each step of the integration.


We are happy to announce the release of Delta Lake 2.0 (pypi, maven, release notes) on Apache Spark™ 3.2, with a number of new features, highlights of which are discussed below.

The significance of Delta Lake 2.0 is not just a number – though it is timed quite nicely with Delta Lake’s 3rd birthday. It reiterates our collective commitment to the open-sourcing of Delta Lake, as announced by Michael Armbrust’s Day 1 keynote at Data + AI Summit 2022.

What’s new in Delta Lake 2.0?

There have been a lot of new features released in the last year between Delta Lake 1.0, 1.2, and now 2.0. This blog will review a few of these specific features that are going to have a large impact on your workload.

Delta 1.2 vs Delta 2.0 chart

Improving data skipping

When exploring or slicing data using dashboards, data practitioners will often run queries with a specific filter in place. As a result, the matching data is often buried in a large table, requiring Delta Lake to read a significant amount of data. With data skipping via column statistics and Z-Order, the data can be clustered by the most common filters used in queries — sorting the table to skip irrelevant data, which can dramatically increase query performance.

Support for data skipping via column statistics

When querying any table from HDFS or cloud object storage, by default, your query engine will scan all of the files that make up your table. This can be inefficient, especially if you only need a smaller subset of data. To improve this process, as part of the Delta Lake 1.2 release, we included support for data skipping by utilizing the Delta table’s column statistics.

For example, when running the following query, you do not want to unnecessarily read files outside of the year or uid ranges.

Figure: example query selecting from the events table with filters on year and uid

When Delta Lake writes a table, it will automatically collect the minimum and maximum values for each column and store them directly in the Delta log (i.e. column statistics). Therefore, when a query engine reads the transaction log, read queries can skip files outside the range of those min/max values, as visualized below.

Figure: data skipping using the min/max column statistics recorded in the Delta log

This approach is more efficient than row-group filtering within the Parquet file itself, as you do not need to read the Parquet footer. For more information on the latter process, please refer to How Apache Spark™ performs a fast count using the parquet metadata. For more information on data skipping, please refer to data skipping.
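To make the idea concrete, here is a minimal sketch in plain Python (file names and statistics are illustrative; real column statistics live in the Delta transaction log's JSON actions):

```python
# Each data file's log entry carries per-column min/max statistics,
# a simplified model of Delta's column statistics.
files = [
    {"path": "part-0.parquet", "min": {"year": 2018, "uid": 1000}, "max": {"year": 2019, "uid": 4999}},
    {"path": "part-1.parquet", "min": {"year": 2020, "uid": 5000}, "max": {"year": 2021, "uid": 9999}},
    {"path": "part-2.parquet", "min": {"year": 2021, "uid": 2000}, "max": {"year": 2022, "uid": 7000}},
]

def files_to_read(files, column, value):
    """Keep only files whose [min, max] range for `column` can contain `value`."""
    return [f["path"] for f in files
            if f["min"][column] <= value <= f["max"][column]]

# A filter like `WHERE year = 2021` only needs to open two of the three files.
print(files_to_read(files, "year", 2021))  # ['part-1.parquet', 'part-2.parquet']
```

The pruning happens entirely from log metadata, which is why no Parquet footer ever needs to be read for a skipped file.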

Support Z-Order clustering of data to reduce the amount of data read

But data skipping using column statistics is only one part of the solution. To maximize data skipping, the data also needs to be clustered so that each file covers a narrow min/max range. Sorting the data can help, but sorting is most effective when applied to a single column.

OPTIMIZE deltaTable ZORDER BY (x, y)

Regular sorting of data by primary and secondary columns (left) and 2-dimensional Z-order data clustering for two columns (right).

But with Z-order, its space-filling curve provides better multi-column data clustering. This data clustering allows column stats to be more effective in skipping data based on filters in a query. See the documentation and this blog for more details.
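A minimal sketch of the underlying idea: Z-ordering sorts rows by the Morton code of their column values, interleaving the bits of each column so that rows close in every dimension end up close in the sort order (and therefore in the same files, with tight min/max ranges on both columns):

```python
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Compute the Morton (Z-order) code of (x, y) by interleaving their bits."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)      # x contributes the even bits
        z |= ((y >> i) & 1) << (2 * i + 1)  # y contributes the odd bits
    return z

points = [(x, y) for x in range(4) for y in range(4)]
# Sorting by Morton code clusters points that are close in BOTH x and y.
points.sort(key=lambda p: interleave_bits(*p))
print(points[:4])  # the first cell of the Z curve: [(0, 0), (1, 0), (0, 1), (1, 1)]
```

A plain sort by (x, y) would instead place all of x=0 first, giving files with a tiny x range but the full y range, which is why single-column sorting helps only one filter column.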

Support Change Data Feed on Delta tables

One of the biggest value propositions of Delta Lake is its ability to maintain data reliability in the face of changing records brought on by data streams. However, identifying which rows changed previously required scanning and reading the entire table, creating significant overhead that can slow performance.

With Change Data Feed (CDF), you can now read a Delta table’s change feed at the row level rather than the entire table to capture and manage changes for up-to-date silver and gold tables. This improves your data pipeline performance and simplifies its operations.

To enable CDF, you must explicitly use one of the following methods:

  • New table: Set the table property delta.enableChangeDataFeed = true in the CREATE TABLE command.

    CREATE TABLE student (id INT, name STRING, age INT) TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • Existing table: Set the table property delta.enableChangeDataFeed = true in the ALTER TABLE command.

    ALTER TABLE myDeltaTable SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • All new tables: Set the default table property in the Spark session configuration.

    SET spark.databricks.delta.properties.defaults.enableChangeDataFeed = true;

An important thing to remember is that once you enable the change data feed option for a table, you can no longer write to the table using Delta Lake 1.2.1 or below. However, you can always read the table. In addition, only changes made after you enable the change data feed are recorded; past changes to a table are not captured.

So when should you enable Change Data Feed? The following use cases should drive the decision.

  • Silver and Gold tables: When you want to improve Delta Lake performance by streaming row-level changes for up-to-date silver and gold tables. This is especially apparent following MERGE, UPDATE, or DELETE operations, accelerating and simplifying ETL operations.
  • Transmit changes: Send a change data feed to downstream systems such as Kafka or RDBMS that can use the feed to process later stages of data pipelines incrementally.
  • Audit trail table: Capturing the change data feed as a Delta table provides perpetual storage and efficient query capability to see all changes over time, including when deletes occur and what updates were made.
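As a rough sketch of what consuming a change feed looks like (plain Python with an in-memory table; real CDF rows are read through Spark and also carry _commit_version and _commit_timestamp columns), a downstream silver table is kept in sync by applying row-level changes instead of re-reading the whole source:

```python
# Current state of a downstream (silver) table, keyed by id.
silver = {1: {"name": "Alice", "age": 30}, 2: {"name": "Bob", "age": 25}}

# Change-feed rows: each carries a _change_type column, one of
# insert, update_preimage, update_postimage, or delete.
changes = [
    {"id": 3, "name": "Carol", "age": 41, "_change_type": "insert"},
    {"id": 2, "name": "Bob", "age": 25, "_change_type": "update_preimage"},
    {"id": 2, "name": "Bob", "age": 26, "_change_type": "update_postimage"},
    {"id": 1, "name": "Alice", "age": 30, "_change_type": "delete"},
]

for row in changes:
    kind = row["_change_type"]
    if kind in ("insert", "update_postimage"):
        silver[row["id"]] = {"name": row["name"], "age": row["age"]}
    elif kind == "delete":
        silver.pop(row["id"], None)
    # update_preimage rows describe the old value and need no action here

print(sorted(silver))  # [2, 3]
```

Only the four change rows are processed, regardless of how large the source table is.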

Support for dropping columns

For versions of Delta Lake prior to 1.2, Parquet files were required to store data under the same column names as the table schema. Delta Lake 1.2 introduced a mapping between the logical column name and the physical column name in those Parquet files. Because the physical names remain stable and unique, renaming a logical column becomes a simple change in the mapping, and logical column names can contain arbitrary characters while the physical names remain Parquet-compliant.

Before column mapping and with column mapping

As part of the Delta Lake 2.0 release, we leveraged column mapping so that dropping a column is a metadata operation. Therefore, instead of physically rewriting all of the underlying table's files to drop a column, the removal can be recorded as a simple modification to the Delta transaction log (i.e. a metadata operation). Run the following SQL command to drop a column:

ALTER TABLE myDeltaTable DROP COLUMN myColumn
See documentation for more details.
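A minimal sketch of why column mapping makes dropping a column cheap (the physical names below are illustrative; Delta stores the real mapping in the table schema's metadata):

```python
# The table schema maps each logical column name to a stable physical
# name used inside the Parquet files.
schema = {
    "id":   "col-7f3a",
    "name": "col-91bc",
    "age":  "col-02de",
}

def drop_column(schema, logical_name):
    """Dropping a column only rewrites the mapping (a metadata operation);
    the Parquet files on disk are left untouched."""
    new_schema = dict(schema)
    del new_schema[logical_name]
    return new_schema

schema = drop_column(schema, "age")
print(sorted(schema))  # ['id', 'name']; no data files were rewritten
```

Readers simply stop projecting the orphaned physical column, so the operation completes in constant time rather than scaling with table size.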

Support for Dynamic Partition Overwrites

In addition, Delta Lake 2.0 now supports Delta dynamic partition overwrite mode for partitioned tables; that is, overwrite only the partitions with data written into them at runtime.

When in dynamic partition overwrite mode, we overwrite all existing data in each logical partition for which the write will commit new data. Any existing logical partitions for which the write does not contain data will remain unchanged. This mode is only applicable when data is being written in overwrite mode: either INSERT OVERWRITE in SQL, or a DataFrame write with df.write.mode("overwrite"). In SQL, you can run the following commands:

SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;

Note that dynamic partition overwrite conflicts with the option replaceWhere for partitioned tables. See the documentation for more details.
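The semantics can be sketched in a few lines of plain Python (an in-memory model with illustrative data; Spark applies the same rule at the file level):

```python
# A table partitioned by year: partition key -> rows.
table = {
    ("2021",): [{"name": "Alice"}],
    ("2022",): [{"name": "Bob"}],
}
incoming = [
    {"year": "2022", "name": "Carol"},
    {"year": "2022", "name": "Dave"},
]

def overwrite_dynamic(table, rows, partition_cols=("year",)):
    """Replace only the logical partitions present in the incoming write;
    all other partitions are left untouched."""
    touched = {}
    for row in rows:
        key = tuple(row[c] for c in partition_cols)
        data = {k: v for k, v in row.items() if k not in partition_cols}
        touched.setdefault(key, []).append(data)
    new_table = dict(table)
    new_table.update(touched)  # overwrite only the touched partitions
    return new_table

table = overwrite_dynamic(table, incoming)
print(len(table[("2021",)]), len(table[("2022",)]))  # 1 2
```

Under static overwrite, by contrast, the 2021 partition would have been wiped out even though the write contained no 2021 data.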

Additional Features in Delta Lake 2.0

In the spirit of performance optimizations, Delta Lake 2.0.0 also includes these additional features:

  • Support for idempotent writes to Delta tables to enable fault-tolerant retry of Delta table writing jobs without writing the data multiple times to the table. See the documentation for more details.
  • Support for multi-part checkpoints, which split the Delta Lake checkpoint into multiple parts to speed up writing and reading checkpoints. See documentation for more details.
  • Other notable changes
    • Improve generated column data skipping by adding support for skipping by generated columns based on nested columns.
    • Improve table schema validation by blocking unsupported data types in Delta Lake.
    • Support creating a Delta Lake table with an empty schema.
    • Change the behavior of DROP CONSTRAINT to throw an error when the constraint does not exist. Before this version, the command used to return silently.
    • Fix the symlink manifest generation when partition values contain spaces.
    • Fix an issue where incorrect commit stats are collected.
    • More ways to access the Delta table OPTIMIZE file compaction command.

Building a Robust Data Ecosystem

As noted in Michael Armbrust’s Day 1 keynote and our Dive into Delta Lake 2.0 session, a fundamental aspect of Delta Lake is the robustness of its data ecosystem.


As data volume and variety continue to rise, the need to integrate with the most common ingestion engines is critical. For example, we’ve recently announced integrations with Apache Flink, Presto, and Trino — allowing you to read and write to Delta Lake directly from these popular engines. Check out Delta Lake > Integrations for the latest integrations.

Delta's expanding ecosystem of connectors

Delta Lake will be relied on even more to bring reliability and improved performance to data lakes by providing ACID transactions and unifying streaming and batch transactions on top of existing cloud data stores. By building connectors with the most popular compute engines and technologies, the appeal of Delta Lake will continue to increase — driving more growth in the community and rapid adoption of the technology across the most innovative and largest enterprises in the world.

Updates on Community Expansion and Growth

We are proud of the community and the tremendous work over the years to deliver the most reliable, scalable, and performant table storage format for the lakehouse to ensure consistent high-quality data. None of this would be possible without the contributions from the open-source community. In the span of a year, we have seen the number of downloads skyrocket from 685K monthly downloads to over 7M downloads/month. As noted in the following figure, this growth is in no small part due to the quickly expanding Delta ecosystem.

The most widely used lakehouse format in the world

All of this activity and the growth in unique contributions — including commits, PRs, changesets, and bug fixes — has culminated in an increase in contributor strength by 633% during the last three years (Source: The Linux Foundation Insights).

But it is important to remember that we could not have done this without the contributions of the community.


Saying this, we wanted to provide a quick shout-out to all of those involved with the release of Delta Lake 2.0: Adam Binford, Alkis Evlogimenos, Allison Portis, Ankur Dave, Bingkun Pan, Burak Yilmaz, Chang Yong Lik, Chen Qingzhi, Denny Lee, Eric Chang, Felipe Pessoto, Fred Liu, Fu Chen, Gaurav Rupnar, Grzegorz Kołakowski, Hussein Nagree, Jacek Laskowski, Jackie Zhang, Jiaan Geng, Jintao Shen, Jintian Liang, John O’Dwyer, Junyong Lee, Kam Cheung Ting, Karen Feng, Koert Kuipers, Lars Kroll, Liwen Sun, Lukas Rupprecht, Max Gekk, Michael Mengarelli, Min Yang, Naga Raju Bhanoori, Nick Grigoriev, Nick Karpov, Ole Sasse, Patrick Grandjean, Peng Zhong, Prakhar Jain, Rahul Shivu Mahadev, Rajesh Parangi, Ruslan Dautkhanov, Sabir Akhadov, Scott Sandre, Serge Rielau, Shixiong Zhu, Shoumik Palkar, Tathagata Das, Terry Kim, Tyson Condie, Venki Korukanti, Vini Jaiswal, Wenchen Fan, Xinyi, Yijia Cui, Yousry Mohamed.

We’d also like to thank Nick Karpov and Scott Sandre for their help with this post.

How can you help?

We’re always excited to work with current and new community members. If you’re interested in helping the Delta Lake project, please join our community today through many forums, including GitHub, Slack, Twitter, LinkedIn, YouTube, and Google Groups.

Join the community today


Global visionaries headline the premier open source event in Europe to share insights on OSS adoption in Europe, driving the circular economy, finding inspiration through the pandemic, supply chain security, and more.

SAN FRANCISCO, August 4, 2022 —  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the keynote speakers for Open Source Summit Europe, taking place September 13-16 in Dublin, Ireland. The event is being produced in a hybrid format, with both in-person and virtual participation available, and is co-located with the Hyperledger Global Forum, OpenSSF Day, Linux Kernel Maintainer Summit, KVM Forum, and Linux Security Summit, among others.

Open Source Summit Europe is the leading conference for developers, sysadmins, and community leaders to gather and collaborate, share information, gain insights, solve technical problems, and further innovation. It is a conference umbrella composed of 13 events covering the most important technologies and issues in open source, including LinuxCon, Embedded Linux Conference, OSPOCon, SupplyChainSecurityCon, CloudOpen, Open AI + Data Forum, and more. Over 2,000 attendees are expected.

2022 Keynote Speakers Include:

  • Hilary Carter, Vice President of Research, The Linux Foundation
  • Bryan Che, Chief Strategy Officer, Huawei; Cloud Native Computing Foundation Governing Board Member & Open 3D Foundation Governing Board Member
  • Demetris Cheatham, Senior Director, Diversity, Inclusion & Belonging Strategy, GitHub
  • Gabriele Columbro, Executive Director, Fintech Open Source Foundation (FINOS)
  • Dirk Hohndel, Chief Open Source Officer, Cardano Foundation
  • Ross Mauri, General Manager, IBM LinuxONE
  • Dušan Milovanović, Health Intelligence Architect, World Health Organization
  • Mark Pollock, Explorer, Founder & Collaborator
  • Christopher “CRob” Robinson, Director of Security Communications, Product Assurance and Security, Intel Corporation
  • Emilio Salvador, Head of Standards, Open Source Program Office, Google
  • Robin Teigland, Professor of Strategy, Management of Digitalization, in the Entrepreneurship and Strategy Division, Chalmers University of Technology; Director, Ocean Data Factory Sweden and Founder, Peniche Ocean Watch Initiative (POW)
  • Linus Torvalds, Creator of Linux and Git
  • Jim Zemlin, Executive Director, The Linux Foundation

Additional keynote speakers will be announced soon. 

Registration (in-person) is offered at the price of US$1,000 through August 23. Registration to attend virtually is $25. Members of The Linux Foundation receive a 20 percent discount off registration and can contact to request a member discount code. 

Health and Safety
In-person attendees will be required to show proof of COVID-19 vaccination or provide a negative COVID-19 test to attend, and will need to comply with all on-site health measures, in accordance with The Linux Foundation Code of Conduct. To learn more, visit the Health & Safety webpage.

Event Sponsors
Open Source Summit Europe 2022 is made possible thanks to our sponsors, including Diamond Sponsors: AWS, Google and IBM, Platinum Sponsors: Huawei, Intel and OpenEuler, and Gold Sponsors: Cloud Native Computing Foundation, Codethink, Docker, Mend, NGINX, Red Hat, and Styra. For information on becoming an event sponsor, click here or email us.

Members of the press who would like to request a press pass to attend should contact Kristin O’Connell.

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

Visit our website and follow us on Twitter, LinkedIn, and Facebook for all the latest event updates and announcements.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: Linux is a registered trademark of Linus Torvalds. 


Media Contact
Kristin O’Connell
The Linux Foundation


LISLE, IL., August 3, 2022 — The American Association of Insurance Services (AAIS) and the Linux Foundation welcome Jefferson Braswell as the new Executive Director of the openIDL Project.

“AAIS is excited about the expansion of openIDL in the insurance space, and the addition of Jefferson as Executive Director signals even more strength and momentum for the fast-developing project,” said Ed Kelly, AAIS Executive Director. “We are happy to continue to work with the Linux Foundation to help effect meaningful, positive change for the insurance ecosystem.”

“openIDL is a Linux Foundation Open Governance Network and the first of its kind in the insurance industry,” said Daniela Barbosa, General Manager of Blockchain, Healthcare and Identity at the Linux Foundation. “It leverages open source code and community governance for objective transparency and accountability among participants, with strong executive leadership helping shepherd this type of open governance network. Jeff Braswell’s background and experience in financial standards initiatives and consortium building aligns very well with openIDL’s next growth and expansion period.”

Braswell has been successfully providing leading-edge business solutions for information-intensive enterprises for over 30 years. As a founding Director, he recently completed a 6-year term on the Board of the Global Legal Entity Identifier Foundation (GLEIF), where he chaired the Technology, Operations and Standards Committee. He is also the Chair of the Algorithmic Contract Types Unified Standards Foundation (ACTUS), and he has actively participated in international financial data standards initiatives.

Previously, as Co-Founder and President of Berkeley-based Risk Management Technologies (RMT), Braswell designed and led the successful implementation of advanced, firm-wide risk management solutions integrated with enterprise-wide data management tools. They were used by  many of the world’s largest financial institutions, including Wells Fargo, Credit Suisse, Chase, PNC, Sumitomo Mitsui Banking Corporation, Mellon, Wachovia, Union Bank and ANZ.

“We appreciate the foundation that AAIS laid for openIDL, and I look forward to bringing my expertise and knowledge to progress this project forward,” shared Braswell. “Continuing the work with the Linux Foundation to positively impact insurance services through open-source technology is exciting and will surely change the industry for the better moving forward.” 

openIDL, an open source, distributed ledger platform, infuses efficiency, transparency and security into regulatory reporting. With openIDL, insurers fulfill requirements while retaining the privacy of their data. Regulators have the transparency and insights they need, when they need them. Initially developed by AAIS, expressly for its Members, openIDL is now being further advanced by the Linux Foundation as an open-source ecosystem for the entire insurance industry.

Established in 1936, AAIS serves the Property & Casualty insurance industry as the only national nonprofit advisory organization governed by its Member insurance carriers. AAIS delivers tailored advisory solutions including best-in-class policy forms, rating information and data management capabilities for commercial lines, inland marine, farm & agriculture and personal lines insurers. Its consultative approach, unrivaled customer service and modern technical capabilities underscore a focused commitment to the success of its members. AAIS also serves as the administrator of openIDL, the insurance industry’s regulatory blockchain, providing unbiased governance within existing insurance regulatory frameworks. For more information about AAIS, please visit



openIDL (open Insurance Data Link) is an open blockchain network that streamlines regulatory reporting and provides new insights for insurers, while enhancing timeliness, accuracy, and value for regulators. openIDL is the first open blockchain platform that enables the efficient, secure, and permissioned-based collection and sharing of statistical data. For more information, please visit



John Greene
Director – Marketing & Communications

Linux Foundation

Dan Whiting
Director of Media Relations and Content


Interoperability and portability of real-time 3D assets and tools deliver unparalleled flexibility, as the Open 3D community celebrates its first birthday

SAN FRANCISCO – July 20, 2022 – The Open 3D Foundation (O3DF) is proud to announce Epic Games as a Premier member alongside Adobe, Amazon Web Services (AWS), Huawei, Intel, LightSpeed Studios, Microsoft and Niantic, as it celebrates its first birthday.

With today’s world racing faster and faster towards 3D technologies, the O3DF provides a home for artists, content creators, developers and technology leaders to congregate and collaborate, share best practices and shape the future of open 3D development. This thriving community is focused on making it easier to use and share 3D assets with its partners and the Open 3D Engine (O3DE), the first high-fidelity, fully-featured, real-time, open-source 3D engine, available to every industry.

Epic Games, developer of Unreal Engine, joins the O3DF as a Premier member to further interoperability and portability of assets, visuals and media scripting, enabling artists and content creators around the globe to unleash their creativity and innovation by removing barriers in their choice of tools. Marc Petit, VP of Unreal Engine Ecosystem at Epic Games, will join the O3DF’s Governing Board. In this role, he will share what Epic has learned over 30 years in the industry to help shape the Foundation’s strategic direction and curation of 3D visualization and simulation projects.

“The metaverse will require companies to work together to advance open standards and open-source tools, and we believe the Open 3D Foundation will play an important role in this journey,” said Petit. “With shared standards for interoperability, we’re giving creators more freedom and flexibility to build interactive 3D content using the tools they’re most comfortable with, and to bring those amazing experiences to life in Unreal Engine and across other 3D engines.” 

This move builds on Epic Games’ steadfast commitment in delivering choice to content producers to unleash their creativity. In addition to enabling them to move media seamlessly between development environments, the Open 3D Engine allows artists and developers to consume only what they need, with the ability to customize components based on their unique requirements.

“We applaud Epic Games’ commitment to the open-source community and welcome them into the Open 3D Foundation as our newest Premier member, underscoring our mission of championing the deep integration of open source with commercial solutions to accelerate growth in a sustainable, balanced ecosystem that fuels the flywheel of success and innovation,” said Royal O’Brien, Executive Director of the Open 3D Foundation and General Manager of Games and Digital Media at the Linux Foundation. “It’s truly exciting to see how the industry is responding to the real-time 3D needs of content creators around the globe, providing them with best-of-breed tools.”

Celebrating Its First Birthday

The Foundation and its anchor project, O3DE, celebrate their first birthday as they welcome Epic Games into this quickly growing community. Since the Foundation’s public announcement in July 2021, over 25 member companies have joined. Other Premier members include Adobe, Amazon Web Services (AWS), Huawei, Intel, Microsoft, LightSpeed Studios and Niantic.

In May, O3DE announced its latest release, focused on performance, stability, and usability enhancements. With over 1,460 code merges, this new release offers several improvements aimed at making it easier to build 3D simulations for AAA games and a range of other applications. Significant enhancements include core stability, installer validation, motion matching, user-defined property (UDP) support for the asset pipeline, and automated testing advancements. The O3D Engine community is very active, averaging up to two million line changes and 350-450 commits monthly from 60-100 authors across 41 repos.

Join Us at O3DCon

On October 17-19, the Open 3D Foundation will host O3DCon, its flagship conference, bringing together technology leaders, indie developers, and academia to share ideas and best practices, discuss hot topics, and foster the future of 3D development across a variety of industries and disciplines. For those interested in sponsoring this event, please contact 

Anyone interested in the O3D Engine is invited to get involved and connect with the community on and

About the Open 3D Engine (O3DE) project

O3D Engine is the flagship project managed by the Open 3D (O3D) Foundation. The open-source project is a modular, cross-platform 3D engine built to power anything from AAA games to cinema-quality 3D worlds to high-fidelity simulations. The code is hosted on GitHub under the Apache 2.0 license. To learn more, please visit

About the Open 3D Foundation

Established in July 2021, the mission of the Open 3D Foundation (O3DF) is to make an open-source, fully-featured, high-fidelity, real-time 3D engine for building games and simulations, available to every industry. The Open 3D Foundation is home to the O3D Engine project. To learn more, please visit

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at

Media Inquiries:

Source link

To address evolving Data and Storage needs throughout the industry, SODA Foundation, in partnership with Linux Foundation Research, is once again conducting a survey to provide insights into challenges, gaps, and trends for data and storage in the era of cloud native, edge, AI, and 5G. The results will serve to guide the SODA Foundation technical direction and ecosystem. With this survey, we seek to answer:

  • What are the data & storage challenges faced by end users?
  • What are the key trends shaping the data & storage industry?
  • Which open source data & storage projects are users interested in?
  • What cloud strategies are being adopted by businesses?

Through new insights generated from the data and storage community, end users will be better equipped to make decisions, vendors can improve their products, and the SODA Foundation can establish new technical directions — and beyond!

Please participate now; we intend to close the survey in August.

Privacy and confidentiality are important to us. Neither participant names, nor their company names, will be displayed in the final results. 

As a thank you for participating in this research, once you have completed the survey, a code will be displayed on the confirmation page, which can be used for a 25% discount on any Linux Foundation training course or certification exam listed in our catalog: 

This survey should take no more than 15 minutes of your time. 

To take the 2022 SODA Foundation Data & Storage Trends Survey, click the button below in your choice of English, Chinese, and Japanese.


Your name and company name will not be displayed. Responses are attributed to your role, company size, and industry. Responses will be subject to the Linux Foundation’s Privacy Policy, available at Please note that members of the SODA Foundation survey committee who are not LF employees will review the survey results. If you do not want them to have access to your name or email address in connection with the survey, please do not provide your name or email address.


We will summarize the survey data and share the learnings later this year on the SODA website. In addition, we will produce an in-depth survey report which will be shared with all survey participants.


The SODA Foundation is an open source project under the Linux Foundation that aims to foster an ecosystem of open source data management and storage software for data autonomy. SODA Foundation offers a neutral forum for cross-project collaboration and integration and provides end-users with quality end-to-end solutions. We intend to use this survey data to help guide the SODA Foundation and its surrounding ecosystem on important issues.


We are grateful for the support of our many survey distribution partners, including:

  • China Electronics Standardization Institute (CESI)
  • China Open Source Cloud League (COSCL)
  • Chinese Software Developer Network (CSDN)
  • Cloud Computing Innovation Council of India (CCICI)
  • Cloud Native Computing Foundation (CNCF)
  • Electronics For You (EFY)
  • IEEE Bangalore Section
  • Japan Data Storage Forum (JDSF)
  • Mulan Project
  • Open Infra Foundation (OIF)
  • Storage Networking Industry Association (SNIA)


If you have questions regarding this survey, please email us at or ask us on Slack at

Sign up for the SODA Newsletter at

Source link

The premier event in Europe for open source code and community contributors features 200+ sessions across 13 micro-conferences, covering the pivotal topics and technologies at the core of open source.

SAN FRANCISCO, July 12, 2022 —  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the full schedule for Open Source Summit Europe, the leading conference for open source developers, technologists, and community leaders. The event is taking place September 13-16 in Dublin, Ireland and virtually. The schedule can be viewed here.

OS Summit Europe will feature a robust program of 325+ talks across 13 micro-conferences covering the most essential and cutting edge topics in open source: Linux Systems, Supply Chain Security, AI + Data, OSPOs, Community Leadership, Embedded IoT, Cloud, Diversity, Containers, Embedded Linux and more.

2022 Conference Session Highlights Include:

  • LinuxCon
    • Containers as an Illusion – Michael Kerrisk,
    • How to Report Your Linux Kernel Bug – Thorsten Leemhuis
  • Embedded Linux Conference
    • Booting Automotive ECUs Really Fast with Modern Security Features – Brendan Le Foll, BMW Car IT GmbH
    • From a Security Expert’s Diary: DOs and DON’Ts when Choosing Software for Your Next Embedded Product – Marta Rybczynska, Syslinbit
  • CloudOpen
    • Addressing the Transaction Challenge in a Cloud-native World – Grace Jansen, IBM
    • The Challenges and Solutions of Open Edge Infrastructures – Ildiko Vancsa, Open Infrastructure Foundation
  • OSPOCon
    • Building a Team for the Upstream: Things We Learned Building InnerSource Teams for Open Source Impact – Emma Irwin, Microsoft
    • A Practical Guide for Outbound Open Source – Which Scales and Can Be Adapted Easily for Companies of Different Size – Oliver Fendt, Siemens AG
  • Critical Software Summit
    • The Unexpected Demise of Open Source Libraries – Liran Tal, Snyk
    • Address Space Isolation for Enhanced Safety of the Linux Kernel – Igor Stoppa, NVIDIA
  • Emerging OS Forum
    • Demystifying the WASM Landscape: A Primer – Divya Mohan, SUSE
    • How Open Source Helps a Grid Operator with the Challenges of the Energy Transition – Jonas van den Bogaard & Nico Rikken, Alliander
  • SupplyChainSecurityCon
    • Composing the Ultimate SBOM – Ivana Atanasova & Velichka Atanasova, VMware
    • From Kubernetes With ♥ Open Tools For Open, Secure Supply Chains – Adolfo García Veytia, Chainguard
  • Diversity Empowerment Summit
    • Overcoming Imposter Syndrome to Become a Conference Speaker! – Dawn Foster, VMware
    • Teaching Collaboration to the Next Generation of Open Source Contributors – Ruth Suehle, Red Hat
  • Open Source On-Ramp
    • Debugging Embedded Linux – Marta Rybczynska, Syslinbit
    • Getting Started with Kernel-based Virtual Machine (KVM) – Leonard Sheng Sheng Lee, Computas
  • Open AI + Data Forum 
    • Beyond Neural Search: Hands-on Tutorial on Building Cross-Modal/Multi-Modal Solution with Jina AI – Han Xiao & Sami Jaghouar, Jina AI
    • Truly Open Lineage – Mandy Chessell, Pragmatic Data Research Ltd
  • ContainerCon
    • Evaluation of OSS Options to Build Container Images – Matthias Haeussler, Novatec
    • Interactive Debugging of Dockerfile With Buildg – Kohei Tokunaga, NTT Corporation
  • Community Leadership Conference
    • Panel Discussion: Growing Open Source in the Irish Government – Clare Dillon, Open Ireland Network; Tony Shannon, Department of Public Expenditure & Reform in Government of Ireland; Tim Willoughby, An Garda Síochána, Ireland’s Police Service; Gar Mac Criosta, Linux Foundation Public Health; John Concannon, Department of Foreign Affairs
    • Dev Team Metrics that Matter – Avishag Sahar, LinearB
  • Embedded IoT Summit 
    • Design of an Open Source, Modular, 5G Capable, Container Based, Scientific Data Capture Hexacopter – Mauro Borrageiro & Ngoni Mombeshora, University of Cape Town
    • Contributing to Zephyr vs (Linux and U-boot) – Parthiban Nallathambi, Linumiz

Keynote speakers will be announced in the coming weeks. 

Registration (in-person) is offered at the early price of $850 through July 17. Registration to attend virtually is $25. Members of The Linux Foundation receive a 20 percent discount off registration and can contact to request a member discount code. 

Applications for diversity and need-based scholarships are currently being accepted. For information on eligibility and how to apply, please click here. The Linux Foundation’s Travel Fund is also accepting applications, with the goal of enabling open source developers and community members to attend events that they would otherwise be unable to attend due to a lack of funding. To learn more and apply, please click here.

Health and Safety
In-person attendees will be required to be fully vaccinated against the COVID-19 virus and will need to comply with all on-site health measures, in accordance with The Linux Foundation Code of Conduct. To learn more, visit the Health & Safety webpage.

Event Sponsors
Open Source Summit Europe 2022 is made possible thanks to our sponsors, including Diamond Sponsors: AWS, Google and IBM, Platinum Sponsors: Huawei and Intel, and Gold Sponsors: Cloud Native Computing Foundation, Codethink, Docker, Mend, Red Hat, and Styra. For information on becoming an event sponsor, click here or email us.

Members of the press who would like to request a press pass to attend should contact Kristin O’Connell.

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

Visit our website and follow us on Twitter, LinkedIn, and Facebook for all the latest event updates and announcements.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: Linux is a registered trademark of Linus Torvalds. 


Media Contact

Kristin O’Connell
The Linux Foundation

Source link

Angel Island

However, he recently finished a project that he says has been the most difficult and meaningful project he has ever been a part of. The subject matter revolves around a troubling chapter in American history and a small bit of rock and scrub brush in the middle of San Francisco Bay called Angel Island.

Ask your average 4th grader if they have ever heard of Ellis Island and they can probably tell you at least something about the well-known narrative surrounding immigration and the United States. Ask them about Angel Island, however, and you’ll probably get a confused look and a shake of the head.

Although Angel Island was often called “The Ellis Island of the West” in the early 1900s, it was anything but welcoming. In reality it was established specifically for the purpose of excluding immigrants of Asian descent, and Chinese immigrants in particular. It wasn’t a place for “Give me your tired, your poor, your huddled masses…” It was more like, Nope, talk to the hand. 

Japanese Internments

When Japan attacked the US Naval base at Pearl Harbor on December 7th, 1941, Angel Island took on an entirely new role during the early stages of the war, but one that was unfortunately still in line with its original anti-Asian roots. Many people are still unaware that following Pearl Harbor, the US Government, on the orders of President Franklin D. Roosevelt, rounded up thousands of US citizens and put them into internment camps for the duration of the war simply because of their Japanese ancestry. Yes, that’s right. This included US citizens who were officially reclassified as enemies of the state purely based upon their heritage. For the first wave of those who were incarcerated, Angel Island was used as the processing center before they were sent off to one of the infamous internment camps across the US, like Manzanar, Tule Lake, or Heart Mountain.

How to educate children about the history?

Remember how we mentioned 4th graders earlier? Well, learning about California history is a pillar of the 4th grade curriculum here in the Golden State, and that is what led to this particular project. The problem? Hundreds of 4th graders tour Angel Island every year – how do you engage them on painful, hard-to-understand subject matter like internment? Well, the folks from the California State Park system and the Angel Island Immigration Station Foundation, which runs the museum there, thought that a LEGO model of the site as it existed during WWII might help bridge that gap.

AIISF reached out to a local LEGO club in the Bay Area in August of 2021 to see if anyone might be interested in volunteering for a project. A number of folks joined the introductory Zoom call, but after hearing the scope of what was being requested, it was clear that this was a long-duration project that would take months to complete. After that first meeting, only Kenny and two other members of the club, Johannes van Galen and Nick McConnell, agreed to proceed with the build.

The LEGO Build

The model was unveiled as the center anchor point for the exhibit, “Taken From Their Families; …” in May, which is Asian & Pacific Islander Heritage Month. Measuring 4 feet by 6 feet, it contains an estimated 30,000 LEGO pieces. The trio invested over 400 hours between research, design, procuring the parts, and of course the build itself.

Getting the model to the museum was no easy feat either. It had to be built in sections, moved by van about 60 miles from where it was being constructed, taken over to the island on a state park supply ship, then reassembled and “landscaped” once on site. 

Source link

Last week, the Linux Foundation held its North America Open Source Summit in Austin. The week-long summit included a large number of breakout sessions as well as several keynotes. Open Source Summit Europe will take place in Dublin in September and Open Source Summit Japan in Yokohama in December.

I’ve been closely involved with open, collaborative innovation and open source communities since the 1990s. In particular, I was asked to lead a new Linux initiative that IBM launched in January of 2000 to embrace Linux across all the company’s products and services.

At the time, Linux had already been embraced by the research, Internet, and supercomputing communities, but many in the commercial marketplace were perplexed by IBM’s decision. Over the next few years, we spent quite a bit of effort explaining to the business community why we were supporting Linux, which included a number of Linux commercials like this one with Muhammad Ali that ran in the 2006 Super Bowl. IBM also had to fight off a multi-billion dollar lawsuit for alleged intellectual property violations in its contributions to the development of Linux. Nevertheless, by the late 2000s, Linux had crossed the chasm to mainstream adoption, having been embraced by a large number of companies around the world.

In 2000, IBM, along with HP, Intel, and several other companies formed a consortium to support the continued development of Linux, and founded a new non-profit organization, the Open Source Development Labs (OSDL). In 2007, OSDL merged with the Free Standards Group (FSG) and became the Linux Foundation (LF). In 2011, the LF marked the 20th anniversary of Linux at its annual LinuxCon North America conference. I had the privilege of giving one of the keynotes at the conference in Vancouver, where I recounted my personal involvement with Linux and open source.

Over the next decade, the LF went through a major expansion. In 2017, its annual conferences were rebranded Open Source Summits to be more representative of LF’s more general open source mission beyond Linux. Then in April of 2021, the LF announced the formation of Linux Foundation Research, a new organization to better understand the opportunities to collaborate on the many open source activities that the LF was by then involved in. Hilary Carter joined the LF as VP of Research and leader of the new initiative.

A few months later, Carter created an Advisory Board to provide insights into emerging technology trends that could have a major impact on the growing number of LF open source projects, as well as to explore the role of open source to help address some of the world’s most pressing challenges. I was invited to become a member of the LF Research Advisory Board, an invitation I quickly accepted.

Having retired from IBM in 2007, I had become involved in a number of new areas, such as cloud, blockchain, AI, and the emerging digital economy. As a result, I had not been much involved with the Linux Foundation in the 2010s, and continued to view the LF as primarily overseeing the development of Linux. But once I joined the Research Advisory Board and learned about the evolution of the LF over the previous decade, I was frankly surprised at the impressive scope of its activities. Let me summarize what I learned.


According to its website, the LF now has over 1,260 company members, including 14 Platinum and 19 Gold, and supports hundreds of open source projects. Some of the projects are focused on technology horizontals, others on industry verticals, and many are subprojects within a large open source project.

Technology horizontal areas include AI, ML, data & analytics; additive manufacturing; augmented & virtual reality; blockchain; cloud containers & virtualization; IoT & embedded; Linux kernel; networking & edge; open hardware; safety critical systems; security; storage; system administration; and Web & application development. Specific infrastructure projects include OpenSSF, the Open Source Software Security Foundation; LF AI & Data, whose mission is to build and support open source innovations in the AI & data domains; and the Hyperledger Foundation, which hosts a number of enterprise-grade blockchain subprojects, such as Hyperledger Cactus, to help securely integrate different blockchains; Hyperledger Besu, an Ethereum client for permissioned blockchains; and Hyperledger Caliper, a blockchain benchmark tool to measure performance.

Industry vertical areas include automotive & aviation; education & training; energy & resources; government & regulatory agencies; healthcare; manufacturing & logistics; media & entertainment; packaged goods; retail; technology; and telecommunication. Industry focused projects include LF Energy, aimed at the digitization of the energy sector to help reach decarbonization targets; Automotive Grade Linux, to accelerate the development and adoption of a fully open software stack for the connected car; CHIPS Alliance, to accelerate open source hardware development; Civil Infrastructure Platform, to enable the development and use of software building blocks for civil infrastructure; LF Public Health, to improve global health equity and innovation; and the Academy Software Foundation, which is focused on the creation of an open source ecosystem for the animation and visual effects industry and hosts a number of related subprojects such as OpenColorIO, a color management framework; OpenCue, a render management system; and OpenEXR, the professional-grade image storage format of the motion picture industry.

The LF estimates that its sponsored projects have developed over one billion lines of open source code which support a significant percentage of the world’s mission critical infrastructures. These projects have created over $54 billion in economic value. A recent study by the European Commission estimated that in 2018, the economic impact of open source across all its member states was between €65 and €95 billion. To better understand the global economic impact of open source, LF Research is sponsoring a study led by Henry Chesbrough, UC Berkeley professor and fellow member of the Advisory Board.

Open source advances are totally dependent on the contributions of highly skilled professionals. The LF estimates that over 750 thousand developers from around 18 thousand contributing companies have been involved in its various projects around the world. To help train open source developers, the LF offers over 130 different courses in a variety of areas, including systems administration, cloud & containers, blockchain, and IoT & embedded development, as well as 25 certification programs.

In addition, the LF, in partnership with edX – the open online learning organization created by Harvard and MIT – has been conducting an annual web survey of open source professionals and hiring managers to identify the latest trends in open source careers, the skills that are most in demand, what motivates open source professionals, how employers can attract and retain top talent, as well as diversity issues in the industry.

The 10th Annual Open Source Jobs Report was just published in June of 2022. The report found that there remains a shortage of qualified talent – 93% of hiring managers have difficulty finding experienced open source professionals; compensation has become a differentiating factor – 58% of managers have given salary increases to retain open source talent; certifications have hit a new level of importance – 69% of hiring managers are more likely to hire certified open source professionals; 63% of open source professionals believe open source runs most modern technology; and cloud skills are the most in demand, followed by Linux, DevOps, and security.

Finally, in her Austin keynote, Hilary Carter presented 10 quick facts about open source from LF Research:

  • 53% of survey respondents contribute to open source because “it’s fun”;
  • 86% of hiring managers say hiring open source talent is a priority for 2022;
  • 2/3 of developers need more training to do their jobs;
  • The most widely used open source software is developed by only a handful of contributors – 136 developers were responsible for more than 80% of the lines of code added to the top 50 packages;
  • 45% of respondents reported that their employers heavily restrict or prohibit contributions to open source projects whether private or work related;
  • 47% of organizations surveyed are using software bill of materials (SBOMs) today;
  • “You feel a sense of community and responsibility to shepherd this work and make it the best it can be”;
  • 1 in 5 professionals have been discriminated against or felt unwelcome;
  • People who don’t feel welcome in open source are from disproportionately underrepresented groups;
  • “When we have multiple people with varied backgrounds and opinions, we get better software”.

“Open source projects are here to stay, and they play a critical role in the ability for most organizations to deliver products and services to customers,” said the LF in its website. “As an organization, if you want to influence the open source projects that drive the success of your business, you need to participate. Having a solid contribution strategy and implementation plan for your organization puts you on the path towards being a good corporate open source citizen.”

Source link

The article originally appeared on the Linux Foundation’s Training and Certification blog. The author is Marco Fioretti. If you are interested in learning more about microservices, consider some of our free training courses including Introduction to Cloud Infrastructure Technologies, Building Microservice Platforms with TARS, and WebAssembly Actors: From Cloud to Edge.

Microservices allow software developers to design highly scalable, highly fault-tolerant internet-based applications. But how do the microservices of a platform actually communicate? How do they coordinate their activities or know who to work with in the first place? Here we present the main answers to these questions, and their most important features and drawbacks. Before digging into this topic, you may want to first read the earlier pieces in this series, Microservices: Definition and Main Applications, APIs in Microservices, and Introduction to Microservices Security.

Tight coupling, orchestration and choreography

When every microservice can and must talk directly with all its partner microservices, without intermediaries, we have what is called tight coupling. The result can be very efficient, but makes all microservices more complex, and harder to change or scale. Besides, if one of the microservices breaks, everything breaks.

The first way to overcome these drawbacks of tight coupling is to have one central controller of all, or at least some of, the microservices of a platform, that makes them work synchronously, just like the conductor of an orchestra. In this orchestration – also called the request/response pattern – it is the conductor that issues requests, receives their answers and then decides what to do next: whether to send further requests to other microservices, or pass the results of that work to external users or client applications.
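The conductor role described above can be sketched in a few lines. This is a toy in-process sketch, not a real distributed system: the service names (`inventory_service`, `payment_service`) and the order structure are hypothetical stand-ins for microservices that would normally be reached over the network.

```python
def inventory_service(order):
    # Stand-in for a remote call: check that every item is in stock.
    return all(qty > 0 for qty in order["items"].values())

def payment_service(order):
    # Stand-in for a remote call: charge the customer.
    return {"status": "charged", "amount": order["total"]}

def orchestrator(order):
    """The 'conductor': issues requests, inspects answers, decides the next step."""
    if not inventory_service(order):
        return {"status": "rejected", "reason": "out of stock"}
    receipt = payment_service(order)
    # Only after both synchronous calls succeed is a result returned to the caller.
    return {"status": "confirmed", "receipt": receipt}

order = {"items": {"book": 2}, "total": 30}
print(orchestrator(order)["status"])  # prints: confirmed
```

Note how all control-flow decisions live in `orchestrator`: the two services never talk to each other, which is exactly what makes the conductor both a convenient single place to reason about the workflow and a potential single point of failure.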

The complementary approach to orchestration is the decentralized architecture called choreography. This consists of multiple microservices that work independently, each with its own responsibilities, but like dancers in the same ballet. In choreography, coordination happens without central supervision, via messages flowing among several microservices according to common, predefined rules.

That exchange of messages, as well as the discovery of which microservices are available and how to talk with them, happen via event buses. These are software components with well-defined APIs to subscribe and unsubscribe to events and to publish events. Event buses can be implemented in several ways, to exchange messages using standards such as XML, SOAP or Web Services Description Language (WSDL).

When a microservice emits a message on a bus, all the microservices that subscribed to the corresponding event bus see it, and know if and how to answer it asynchronously, each on its own, in no particular order. In this event-driven architecture, all a developer must code into a microservice to make it interact with the rest of the platform is the subscription commands for the event buses on which it should generate events, or wait for them.
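The subscribe/publish mechanics can be illustrated with a minimal in-process event bus. This is a sketch only: a real bus would be a networked message broker, and the "microservices" here are just callbacks with hypothetical names (`billing`, `shipping`) chosen for illustration.

```python
from collections import defaultdict

class EventBus:
    """Toy in-process event bus; real ones deliver messages over the network."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every subscriber to this event type sees the event; each reacts on its own.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
log = []

# Two independent "microservices" subscribe to the same event type.
bus.subscribe("order_placed", lambda e: log.append(f"billing: invoice {e['id']}"))
bus.subscribe("order_placed", lambda e: log.append(f"shipping: pack {e['id']}"))

# The emitter knows nothing about its subscribers.
bus.publish("order_placed", {"id": 42})
print(log)  # prints: ['billing: invoice 42', 'shipping: pack 42']
```

The key property of choreography shows up in the last line: adding a third subscriber, or removing one, requires no change at all to the code that publishes the event.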

Orchestration or Choreography? It depends

The two most popular coordination choices for microservices are choreography and orchestration, whose fundamental difference is in where they place control: one distributes it among peer microservices that communicate asynchronously, the other into one central conductor, who keeps everybody else always in line.

Which is better depends upon the characteristics, needs and patterns of real-world use of each platform, with maybe just two rules that apply in all cases. The first is that tight coupling should almost always be avoided, because it goes against the very idea of microservices. Loose coupling with asynchronous communication is a far better match for the fundamental advantages of microservices, that is independent deployment and maximum scalability. The real world, however, is a bit more complex, so let’s spend a few more words on the pros and cons of each approach.

As far as orchestration is concerned, its main disadvantage may be that centralized control often is, if not a synonym, at least a shortcut to a single point of failure. A much more frequent disadvantage of orchestration is that, since microservices and a conductor may be on different servers or clouds, only connected through the public Internet, performance may suffer, more or less unpredictably, unless connectivity is really excellent. At another level, with orchestration virtually any addition of microservices or change to their workflows may require changes to many parts of the platform, not just the conductor. The same applies to failures: when an orchestrated microservice fails, there will generally be cascading effects: such as other microservices waiting to receive orders, only because the conductor is temporarily stuck waiting for answers from the failed one. On the plus side, exactly because the “chain of command” and communication are well defined and not really flexible, it will be relatively easy to find out what broke and where. For the very same reason, orchestration facilitates independent testing of distinct functions. Consequently, orchestration may be the way to go whenever the communication flows inside a microservice-based platform are well defined, and relatively stable.

In many other cases, choreography may provide the best balance between independence of individual microservices, overall efficiency and simplicity of development.

With choreography, a service must only emit events, that is, notifications that something happened (e.g., a log-in request was received), and all its downstream microservices must only react to them, autonomously. Therefore, changing a microservice will have no impact on the ones upstream. Even adding or removing microservices is simpler than it would be with orchestration. The flip side of this coin is that, at least if one goes for it without taking precautions, it creates more chances for things to go wrong, in more places, and in ways that are harder to predict, test or debug. Throwing messages into the Internet counting on everything to be fine, without any way to know if all their recipients got them and were all able to react in the right way, can make life very hard for system integrators.


Certain workflows are by their own nature highly synchronous and predictable. Others aren’t. This means that many real-world microservice platforms could and probably should mix both approaches to obtain the best combination of performance and resistance to faults or peak loads. This is because temporary peak loads – which may be best handled with choreography – may happen only in certain parts of a platform, and the faults with the most serious consequences, for which tighter orchestration could be safer, only in others (e.g. purchases of single products by end customers, vs orders to buy the same products in bulk to restock the warehouse). For system architects, maybe the worst that can happen is to design an architecture that is either orchestration or choreography, but without being really conscious of which one it is (maybe because they are just porting a pre-existing, monolithic platform to microservices), thus getting nasty surprises when something goes wrong, or when new requirements turn out to be much harder than expected to design or test. Which leads to the second of the two general rules mentioned above: don’t even start to choose between orchestration and choreography for your microservices before having the best possible estimate of what their real-world loads and communication needs will be.

Source link