The article originally appeared on the Linux Foundation’s Training and Certification blog. The author is Marco Fioretti. If you are interested in learning more about microservices, consider some of our free training courses including Introduction to Cloud Infrastructure Technologies, Building Microservice Platforms with TARS, and WebAssembly Actors: From Cloud to Edge.

Microservices allow software developers to design highly scalable, highly fault-tolerant internet-based applications. But how do the microservices of a platform actually communicate? How do they coordinate their activities or know who to work with in the first place? Here we present the main answers to these questions, and their most important features and drawbacks. Before digging into this topic, you may want to first read the earlier pieces in this series, Microservices: Definition and Main Applications, APIs in Microservices, and Introduction to Microservices Security.

Tight coupling, orchestration and choreography

When every microservice can and must talk directly with all its partner microservices, without intermediaries, we have what is called tight coupling. The result can be very efficient, but makes all microservices more complex, and harder to change or scale. Besides, if one of the microservices breaks, everything breaks.
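To make the drawback concrete, here is a minimal Python sketch of tight coupling, assuming two hypothetical services – an order service that calls an inventory service directly over HTTP. Every service name, URL, and field here is illustrative, not taken from any real platform.

```python
# Hypothetical sketch of tight coupling: the "orders" service calls the
# "inventory" service directly over HTTP. Names and URLs are illustrative.
import requests

def place_order(item_id: str, quantity: int) -> dict:
    # A direct, synchronous call: if the inventory service is down or slow,
    # this call fails or blocks, and the whole order flow breaks with it.
    response = requests.get(
        f"http://inventory.internal/stock/{item_id}", timeout=2
    )
    response.raise_for_status()
    if response.json()["available"] < quantity:
        raise RuntimeError("not enough stock")
    # ...the payment service would be called next, with the same fragility...
    return {"item_id": item_id, "quantity": quantity, "status": "accepted"}
```

Because the call is direct and synchronous, any outage or slowdown in the inventory service immediately propagates to the order service – exactly the fragility described above.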

The first way to overcome these drawbacks of tight coupling is to have one central controller of all, or at least some, of the microservices of a platform, which makes them work synchronously, just like the conductor of an orchestra. In this orchestration – also called the request/response pattern – it is the conductor that issues requests, receives their answers and then decides what to do next: that is, whether to send further requests to other microservices, or pass the results of that work to external users or client applications.
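As a rough illustration of this request/response pattern, the following Python sketch shows a hypothetical conductor that calls three made-up services in sequence and decides what to do after each answer; the endpoints and payloads are assumptions for the example, not a prescribed implementation.

```python
# Minimal sketch of the orchestration (request/response) pattern.
# The conductor calls each microservice in turn and decides what happens next.
# The service names, URLs, and payloads are illustrative assumptions.
import requests

SERVICES = {
    "inventory": "http://inventory.internal/reserve",
    "payment": "http://payment.internal/charge",
    "shipping": "http://shipping.internal/schedule",
}

def conduct_order(order: dict) -> dict:
    # Step 1: ask the inventory service to reserve the items.
    reservation = requests.post(SERVICES["inventory"], json=order, timeout=5).json()
    if not reservation.get("reserved"):
        return {"status": "rejected", "reason": "out of stock"}

    # Step 2: only after a successful reservation, charge the customer.
    charge = requests.post(SERVICES["payment"], json=order, timeout=5).json()
    if not charge.get("paid"):
        return {"status": "rejected", "reason": "payment failed"}

    # Step 3: the conductor decides the final step and returns the result to
    # the external client; the services never talk to each other directly.
    shipment = requests.post(SERVICES["shipping"], json=order, timeout=5).json()
    return {"status": "accepted", "tracking": shipment.get("tracking_id")}
```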

The complementary approach to orchestration is the decentralized architecture called choreography. This consists of multiple microservices that work independently, each with its own responsibilities, but like dancers in the same ballet. In choreography, coordination happens without central supervision, via messages flowing among several microservices according to common, predefined rules.

That exchange of messages, as well as the discovery of which microservices are available and how to talk with them, happens via event buses. These are software components with well-defined APIs for subscribing and unsubscribing to events, and for publishing events. These event buses can be implemented in several ways, to exchange messages using standards such as XML, SOAP or the Web Services Description Language (WSDL).

When a microservice emits a message on a bus, all the microservices that subscribed to the corresponding event bus see it, and know if and how to answer it asynchronously, each on its own, in no particular order. In this event-driven architecture, all a developer must code into a microservice to make it interact with the rest of the platform is the subscription commands for the event buses on which it should generate events, or wait for them.
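The sketch below shows a deliberately simplified, in-memory event bus in Python to illustrate the subscribe/publish idea. A real platform would use an asynchronous message broker rather than an in-process dictionary, and the event names and handlers here are invented for the example.

```python
# Minimal in-process event bus sketch illustrating publish/subscribe.
# Real platforms would use a message broker instead of an in-memory dict;
# the event names and handlers are illustrative assumptions.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: dict = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Every subscriber sees the event and reacts on its own, in no
        # particular order; the publisher does not know who is listening.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()

# Two independent microservices, each subscribing only to the events it cares about.
bus.subscribe("order.placed", lambda e: print("billing: invoicing", e["order_id"]))
bus.subscribe("order.placed", lambda e: print("shipping: scheduling", e["order_id"]))

# The order service just emits the event; coordination emerges from the subscriptions.
bus.publish("order.placed", {"order_id": "A-42"})
```

Note how adding another downstream microservice is just one more subscribe call; the publisher never changes.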

Orchestration or Choreography? It depends

The two most popular coordination choices for microservices are choreography and orchestration, whose fundamental difference is in where they place control: one distributes it among peer microservices that communicate asynchronously, the other concentrates it in one central conductor, who keeps everybody else always in line.

Which is better depends upon the characteristics, needs and patterns of real-world use of each platform, with maybe just two rules that apply in all cases. The first is that blatant tight coupling should almost always be avoided, because it goes against the very idea of microservices. Loose coupling with asynchronous communication is a far better match with the fundamental advantages of microservices, that is, independent deployment and maximum scalability. The real world, however, is a bit more complex, so let’s spend a few more words on the pros and cons of each approach.

As far as orchestration is concerned, its main disadvantage may be that centralized control often is, if not a synonym for, at least a shortcut to, a single point of failure. A much more frequent disadvantage of orchestration is that, since the microservices and the conductor may be on different servers or clouds, connected only through the public Internet, performance may suffer, more or less unpredictably, unless connectivity is really excellent. At another level, with orchestration virtually any addition of microservices or change to their workflows may require changes to many parts of the platform, not just the conductor. The same applies to failures: when an orchestrated microservice fails, there will generally be cascading effects, such as other microservices waiting to receive orders only because the conductor is temporarily stuck waiting for answers from the failed one. On the plus side, exactly because the “chain of command” and communication are well defined and not really flexible, it will be relatively easy to find out what broke and where. For the very same reason, orchestration facilitates independent testing of distinct functions. Consequently, orchestration may be the way to go whenever the communication flows inside a microservice-based platform are well defined and relatively stable.

In many other cases, choreography may provide the best balance between independence of individual microservices, overall efficiency and simplicity of development.

With choreography, a service must only emit events, that is, notifications that something happened (e.g., a log-in request was received), and all its downstream microservices must only react to them, autonomously. Therefore, changing a microservice will have no impact on the ones upstream. Even adding or removing microservices is simpler than it would be with orchestration. The flip side of this coin is that, at least if one goes for it without taking precautions, it creates more chances for things to go wrong, in more places, and in ways that are harder to predict, test or debug. Throwing messages onto the Internet, counting on everything to be fine but without any way to know whether all their recipients got them and were able to react in the right way, can make life very hard for system integrators.

Conclusion

Certain workflows are by their own nature highly synchronous and predictable. Others aren’t. This means that many real-world microservice platforms could and probably should mix both approaches to obtain the best combination of performance and resistance to faults or peak loads. This is because temporary peak loads – which may be best handled with choreography – may happen only in certain parts of a platform, while the faults with the most serious consequences, for which tighter orchestration could be safer, occur only in others (e.g. purchases of single products by end customers, versus orders to buy the same products in bulk to restock the warehouse). For system architects, maybe the worst thing that can happen is to design an architecture that is either orchestration or choreography, without being really conscious of which one it is (maybe because they are just porting a pre-existing, monolithic platform to microservices), thus getting nasty surprises when something goes wrong, or when new requirements turn out to be much harder than expected to design or test. Which leads to the second of the two general rules mentioned above: don’t even start to choose between orchestration and choreography for your microservices before having the best possible estimate of what their real-world loads and communication needs will be.





One of the first projects I noticed after starting at the Linux Foundation was AgStack. It caught my attention because I have a natural inclination towards farming and ranching, although, in reality, I really just want a reason to own and use a John Deere tractor (or more than one). The reality is the closest I will ever get to being a farmer is my backyard garden with, perhaps, some chickens one day. But I did work in agriculture policy for a number of years, including some time at USDA’s Natural Resources Conservation Service. So, AgStack piqued my interest. Most people don’t really understand where their food comes from, the challenges that exist across the globe, and the innovation that is still possible in agriculture. It is encouraging to see the passion and innovation coming from the folks at AgStack.

Speaking of that, I want to dig into (pun intended) one of AgStack’s projects, Ag-Rec.

Backing up a bit, in the United States, the U.S. Department of Agriculture operates a vast network of cooperative extension offices to help farmers, ranchers, and even gardeners improve their practices. They have proven themselves to be invaluable resources and are credited with improving agriculture practices both here in the U.S. and around the globe through research, information sharing, and partnerships. Even if you aren’t a farmer, they can help you with your garden, lawn, and more. Give them a call – almost every county has an office.

The reality with extension education is that it is still heavily reliant on individuals going to offices and reading printed materials or PDFs. It could use an upgrade to help the data be more easily digestible, to make it quicker to update, to expand the information available, and to facilitate information sharing around the world. Enter Ag-Rec. 

I listened to Brandy Byrd and Gaurav Ramakrishna, both with IBM, present about Ag-Rec at the Open Source Summit 2022 in Austin, Texas. 

Brandy is a native of rural South Carolina, raised in an area where everyone farmed. She recalled some words of wisdom her granddaddy always said: “Never sell the goose that laid the golden egg.” He was referring to the value of the farmland – it was their livelihood. She grew up seeing firsthand the value of farms, and she was already familiar with the value of the information from the extension service and of information sharing among farmers and ranchers beyond mornings at the local coffee shop. But she also sees a better way. 

The vision of Ag-Rec is a framework where rural farmers, from small SC towns to anywhere in the world, have the same cooperative extension resources: a place where they can get info, advice, and community. They don’t have to go to an office or have a physical manual. They can access a wealth of information that can be shared anywhere, anytime. 

On top of that, by making it open source, anyone can use the framework, so anyone can build applications and make the data available in new and useful ways. Ag-Rec is providing the base for even more innovation. Imagine the innovation we don’t know is possible. 

The Roadmap

Brandy and Gaurav shared how Ag-Rec is being built and how developers, UI experts, agriculture practices experts, end users, and others can contribute. When the recording of the presentation is available we will share it here. You can also go over to Ag-Rec’s GitHub for more information and to help. 

Here is the current roadmap: 

Immediate

  • Design and development of UI with Mojoe.net
  • Plant data validation and enhancements
  • Gather requirements to provision additional Extension Service recommendation data
  • Integrate User Registry for authentication and authorization

Mid-term

  • Testing and feedback from stakeholders
  • Deploy the solution on AgStack Cloud
  • Add documentation for external contribution and self-deployment

Long-term

  • Invite other Extension Services and communities
  • Iterate and continuously improve

I, for one, am excited about the possibility of this program to help improve crop production, agricultural-land conservation, pest management, and more around the world. Farms feed the world, fuel economies, and so much more. With even better practices, their positive impact can be even greater while helping conserve the earth’s resources. 

The Partners

In May 2021, the Linux Foundation launched the AgStack Foundation to “build and sustain the global data infrastructure for food and agriculture to help scale digital transformation and address climate change, rural engagement, and food and water security.” Not long after, IBM, Call for Code and Clemson University Cooperative Extension “sought to digitize data that’s been collected over the years, making it accessible to anyone on their phone or computer to search data and find answers they need.” AgStack provided a “way to collaborate with and gain insights from a community of people working on similar ideas, and this helped the team make progress quickly.” And Ag-Rec was born. 

A special thank you to the core team cultivating (pun intended) this innovation: 

Resources





New features bringing unmatched query performance to open data lakehouses

Today, the Delta Lake project announced the Delta Lake 2.0 release candidate, which includes a collection of new features with vast performance and usability improvements. The final release of Delta Lake 2.0 will be made available later this year.

Delta Lake has been a Linux Foundation project since October 2019 and is the open storage layer that brings reliability and performance to data lakes via “lakehouse architectures,” which combine the best of both data warehouses and data lakes under one roof. In the past three years, lakehouses have become an appealing solution for data engineers, analysts, and data scientists who want the flexibility to run different workloads on the same data with minimal complexity and no duplication – from data analysis to the development of machine learning models. Delta Lake is the most widely used lakehouse format in the world and currently sees over 7M downloads per month (and continues to grow).

Delta Lake 2.0 will bring some major improvements to query performance for Delta Lake users, such as support for change data feed, Z-order clustering, idempotent writes to Delta tables, column dropping, and many more (get more details in the Delta Lake 2.0 RC release notes). This enables any organization to build highly performant lakehouses for a wide range of data and AI use cases.
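As a rough sketch of how some of these features are typically exercised, here is a short PySpark example, assuming a SparkSession already configured with the Delta Lake extensions and an existing Delta table. The table and column names are invented, and exact syntax can vary between releases, so treat the Delta Lake 2.0 documentation as authoritative.

```python
# Illustrative PySpark sketch of a few Delta Lake 2.0 features.
# Assumes a SparkSession already configured with the Delta extensions;
# the table and column names are made up, and syntax may vary by release.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-2.0-demo").getOrCreate()

# Enable the change data feed on an (assumed) existing Delta table.
spark.sql("""
    ALTER TABLE events
    SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
""")

# Z-order clustering: co-locate rows with similar values of a filter column
# so that queries on that column can skip more files.
spark.sql("OPTIMIZE events ZORDER BY (event_type)")

# Read the change data feed to see row-level changes since a given version.
changes = (
    spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 1)
    .table("events")
)
changes.show()
```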

The announcement of Delta Lake 2.0 came on stage during the Data + AI Summit 2022 keynote, as Michael Armbrust, distinguished engineer at Databricks and a co-founder of the Delta Lake project, showed how the new features will dramatically improve performance and manageability compared to previous versions and other storage formats. Databricks initially open sourced Delta Lake and has, with the Delta Lake community, been continuously contributing new features to the project. The latest set of features included in v2.0 was first made available to Databricks customers, ensuring they are “battle-tested” for production workloads before being contributed to the project.

Databricks is not the only organization actively contributing to Delta Lake – developers from over 70 different organizations have been collaborating and contributing new features and capabilities.

“The Delta Lake project is seeing phenomenal activity and growth trends indicating the developer community wants to be a part of the project. Contributor strength has increased by 60% during the last year, the growth in total commits is up 95%, and the average lines of code per commit are up 900%. We are seeing this upward velocity from contributing organizations like Uber Technologies, Walmart, and CloudBees, Inc., among others,” 

— Executive Director of the Linux Foundation, Jim Zemlin. 

The Delta Lake community is inviting you to explore Delta Lake and join the community. Here are a few useful links to get you started:





At last week’s Open Source Summit North America, Robin Ginn, Executive Director of the OpenJS Foundation, relayed a principle her mentor taught: “1+1=3”. No, this isn’t ‘new math’; it demonstrates the principle that, working together, we are more impactful than working apart. Or, as my wife and I say all of the time, teamwork makes the dream work. 

This principle is really at the core of open source technology. Turns out it is also how I look at the Open Programmable Infrastructure project. 

Stepping back a bit, as “the new guy” around here, I am still constantly running across projects where I want to dig in more and understand what they do, how they do it, and why they are important. I had that very thought last week as we launched another new project, the Open Programmable Infrastructure Project. As I was reading up on it, they talked a lot about data processing units (DPUs) and infrastructure processing units (IPUs), and I thought, I need to know what these are and why they matter. In the timeless words of The Bobs, “What exactly is it you do here?” 

What are DPUs/IPUs? 

First – and this is important – they are basically the same thing; they just have different names. Here is my oversimplified explanation of what they do.

In most personal computers, you have a separate graphics processing unit(s) that helps the central processing unit(s) (CPU) handle the tasks related to processing and displaying the graphics. They offload that work from the CPU, allowing it to spend more time on the tasks it does best. So, working together, they can achieve more than each can separately. 

Servers powering the cloud also have CPUs, but they have other tasks that can consume tremendous computing power, say data encryption or network packet management. Offloading these tasks to separate processors enhances the performance of the whole system, as each processor focuses on what it does best. 

In other words, 1+1=3. 

DPUs/IPUs are highly customizable

While separate processing units have been around for some time, like your PC’s GPU, their functionality was primarily dedicated to a particular task. Instead, DPUs/IPUs combine multiple offload capabilities that are highly customizable through software. That means a hardware manufacturer can ship these units out and each organization uses software to configure the units according to their specific needs. And they can do this on the fly. 

Core to the cloud and its continued advancement and growth is the ability to quickly and easily create and dispose of the “hardware” you need. It wasn’t too long ago that if you wanted a server, you spent thousands of dollars on one, built all kinds of infrastructure around it, and hoped it was what you needed at the time. Now, pretty much anyone can quickly set up a virtual server in a matter of minutes for virtually no initial cost. 

DPUs/IPUs bring this same type of flexibility to your own datacenter because they can be configured to be “specialized” with software rather than having to literally design and build a different server every time you need a different capability. 

What is Open Programmable Infrastructure (OPI)?

OPI is focused on utilizing open software and standards, as well as frameworks and toolkits, to allow for the rapid adoption and use of DPUs/IPUs. The OPI Project is both hardware and software companies coming together to establish and nurture an ecosystem to support these solutions. It “seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.”

In other words, competitors are coming together to agree on a common, open ecosystem they can build together and innovate, separately, on top of. They are living out 1+1=3.

I, for one, can’t wait to see the innovation.





In a new white paper, the Cardea Project at Linux Foundation Public Health demonstrates a complete, decentralized, open source system for sharing medical data in a privacy-preserving way with machine readable governance for establishing trust.

The Cardea Project began as a response to the global Covid-19 pandemic and the need for countries and airlines to admit travelers. As Covid shut down air travel and presented an existential threat to countries whose economies depended on tourism, SITA Aero, the largest provider of IT technology to the air transport sector, saw decentralized identity technology as the ideal solution to manage proof of Covid test status for travel.

With a verifiable credential, a traveler could hold their health data and not only prove they had a specific test at a specific time, but also use it—or a derivative credential—to prove their test status to enter hotels and hospitality spaces without having to divulge any personal information. Entities that needed to verify a traveler’s test status could, in turn, avoid the complexity of direct integrations with healthcare providers and the challenge of complying with onerous health data privacy law.

Developed by Indicio with SITA and the government of Aruba, the technology was successfully trialed in 2021 and the code specifically developed for the project was donated to Linux Foundation Public Health (LFPH) as a way for any public health authority to implement an open source, privacy-preserving way to manage Covid test and vaccination data. The Cardea codebase continues to develop at LFPH as Indicio, SITA, and the Cardea Community Group extend its features and applications beyond Covid-related data.

On May 22, 2022 at the 15th KuppingerCole European Identity and Cloud Conference in Berlin, SITA won the Verifiable Credentials and Decentralized Identity Award for its implementation of decentralized identity in Aruba.

The new white paper from the Cardea Project provides an in-depth examination of the background to Cardea, the transformational power of decentralized identity technology, how it works, the implementation in Aruba, and how it can be deployed to authenticate and share multiple kinds of health data in privacy-preserving ways. As the white paper notes:

“…Cardea is more than a solution for managing COVID-19 testing; it is a way to manage any health-related process where critical and personal information needs to be shared and verified in a way that enables privacy and enhances security. It is able to meet the requirements of the 21st Century Cures Act and Europe’s General Data Protection Regulation, and in doing so enable use cases that range from simple proof of identity to interoperating ecosystems encompassing multiple cloud services, organizations, and sectors, where data needs to be, and can be, shared in immediately actionable ways.

Open source, interoperable decentralized identity technology is the only viable way to manage both the challenges of the present—where entire health systems can be held at ransom through identity-based breaches—and the opportunities presented by a digital future where digital twins, smart hospitals, and spatial web applications will reshape how healthcare is managed and delivered.”

The white paper is available here. The community development group meets weekly on Thursdays at 9:00am PST—please join us!

This article was originally published on the Linux Foundation Public Health project’s blog





So, I am old enough to remember when the U.S. Congress temporarily intervened in a patent dispute over the technology that powered BlackBerries. A U.S. Federal judge ordered the BlackBerry service to shut down until the matter was resolved, and Congress determined that BlackBerry service was too integral to commerce to be allowed to be turned off. Eventually, RIM settled the patent dispute and the BlackBerry rode off into technology oblivion.

I am not here to argue the merits of this nearly 20-year-old case (in fact, I coincidentally had friends on both legal teams), but it was when I was introduced to the idea of companies that purchase patents with the goal of using this purchased right to extract money from other companies. 

Patents are an important legal protection to foster innovation, but, like all systems, the patent system isn’t perfect. 

At this week’s Open Source Summit North America, we heard from Kevin Jakel with Unified Patents. Kevin is a patent attorney who saw the damage being done to innovation by patent trolls – more kindly known as non-practicing entities (NPEs). 

Kevin points out that patents are intellectual property designed to protect inventions, granting a time-bound legal monopoly, but they are only a sword, not a shield. You can use a patent to stop people, but it doesn’t give you a right to do anything. He emphasizes, “You are vulnerable even if you invented something. Someone can come at you with other patents.” 

Kevin has watched a whole industry develop where patents are purchased by other entities, who then go after successful individuals or companies who they claim are infringing on the patents the entities now legally own (but did not invent). In fact, 88% of all high-tech patent litigation is from an NPE.

NPEs are rational actors using the legal system to their advantage, and they are driven by the fact that almost all of the time the defendant decides to settle to avoid the costs of defending the litigation. This perpetuates the problem by both reducing the risk to the NPEs and giving them funds to purchase additional patents for future campaigns. 

In regards to open source software, the problem is on the rise and is only going to get worse without strategic, consistent action to combat it.

[Chart: open source patent troll cases by year]

Kevin started Unified Patents with the goal of solving this problem without incentivizing further NPE activity. He wants to increase the risk for NPEs so that they are incentivized to not pursue non-existent claims. Because NPEs are rational actors, they are going to weigh risks vs. rewards before making any decisions. 

How does Unified Patents do this? They use a three-step process: 

  • Detect – Patent Troll Campaigns
  • Disrupt – Patent Troll Assertions
  • Deter – Further Patent Troll Investment 

Unified Patents works on behalf of 11 technology areas (they call them Zones). They added an Open Source Zone in 2019 with the help of the Linux Foundation, Open Invention Network, and Microsoft. They look for lawsuits being filed in court, then selectively pick patent trolls out of the group and challenge them, attempting to disrupt the process. They take the patent back to the U.S. Patent and Trademark Office and see if the patent should have ever been granted in the first place. Typically, patent trolls look for broad patents so they can sue lots of companies, making their investment more profitable and less risky. This often means the patent is so broad that it probably should never have been awarded in the first place. 

The result – they end up killing a lot of patents that should never have been issued but are being exploited by patent trolls, stifling innovation. The goal is to slow the trolls down and eventually bring them to a stop as quickly as possible. Then, the next time they go looking for a patent, they look somewhere else.

And it is working. The image below shows some of the open source projects that Unified Patents has actively protected since 2019.

[Image: logos of open source projects protected by Unified Patents]

The Linux Foundation participates in Unified Patents’ Open Source Zone to help protect the individuals and organizations innovating every day. We encourage you to join the fight and create a true deterrence for patent trolls. It is the only way to extinguish this threat. 

Learn more at unifiedpatents.com/join

And if you are a die-hard fan of the BlackBerry’s iconic keyboard, my apologies for dredging up the painful memory of your loss. 





Tomorrow night, in the skies over Congress Bridge in Austin, Texas, 300 drones will work in concert to provide a lightshow to entertain but also inform about the power of open source software to drive innovation in our world, making an impact in every life, every day.

Backing up a bit, open source software often conjures up inaccurate visions and presumptions that just aren’t true. No need to conjure those up – we all know what they are. The reality is that open source software (OSS) has transformed our world and become the backbone of our digital economy and the foundation of our digital world. 


Some quick, fun facts

  • In vertical software stacks across industries, open source penetration ranges from 20 to 85 percent of the overall software used
  • Linux fuels 90%+ of web servers and Internet-connected devices
  • The Android mobile operating system is built on the Linux kernel
  • Immensely popular libraries and tools to build web applications, such as: AMP, Appium, Dojo, jQuery, Marko, Node.js and so many more are open source
  • The world’s top 100 supercomputers run Linux
  • 100% of mainframe customers use Linux
  • The major cloud-service providers – AWS, Google, and Microsoft – all utilize open-source software to run their services and host open-source solutions delivered through the cloud

Open source software is about organizations coming together to collectively solve common problems so they can separately innovate and differentiate on top of the common baseline. They see they are better off pooling resources to make the baseline better. Sometimes it is called “coopetition.” It generally means that while companies may be in competition with each other in certain areas, they can still cooperate on others.

I borrowed from a well-known tagline from my childhood in the headline – open source does bring good things to life. 

Fueling Drone Innovation 

Drones were introduced to the world through military applications and then toys we could all easily fly (well, my personal track record is abysmal). But the reality is that drones are seeing a variety of commercial applications, such as energy facility inspection for oil, gas, and solar, search and rescue, firefighting, and more, with new uses coming online all of the time. We aren’t at The Jetsons level yet, but they are making our lives easier and safer (and some really cool aerial shots).

Much of that innovation comes from open source coopetition. 

The Linux Foundation hosts the Dronecode Foundation, which fosters open source code and standards critical to the worldwide drone industry. In a recent blog post, the general manager, Ramón Roche, discusses some of the ways open source has created an ecosystem of interoperability, which leads to users having more choice and flexibility. 

Building the Foundation

Ramón recounts how it all started with the creation of Pixhawk, open standards for drone hardware, with the goal of making drones fly autonomously using computer vision. Working to overcome the lack of computing power and technology in 2008, Lorenz Meier, then a student, set out to build the necessary flight control software and hardware. Realizing the task’s scale, he sought the help of fourteen fellow students, many of whom were more experienced than him, to make it happen. They built Pixhawk and kick-started an open source community around various technologies. It “enabled talented people worldwide to collaborate and create a full-scale solution that was reusable and standardized. By giving their technology a permissive open source license, they opened it to everyone for use and collaboration.”

Benefits of Openness in the Real World

The innovation and technological backbone we see in drones is thanks to open software, hardware, and standards. Dronecode’s blog has interviews with Max Tubman of Freefly Systems, who talks about how open standards are enabling interoperability of various payloads amongst partners in the Open Ecosystem. Also, Bobby Watts of Watts Innovation explains the power of standardization and how it has streamlined their interoperability with other ecosystem partners like Gremsy and Drone Rescue Systems.


Check out both interviews here and read about what is next.

The story of open source driving innovation in the drone industry is just one of thousands of examples of how open source is driving innovation everywhere. Whether you know it or not, you use open source software every minute of every hour of every day.





Many software projects are not prepared to build securely by default, which is why the Linux Foundation and Open Source Security Foundation (OpenSSF) partnered with technology industry leaders to create Sigstore, a set of tools and a standard for signing, verifying and protecting software. Sigstore is one of several innovative technologies that have emerged to improve the integrity of the software supply chain, reducing the friction developers face in implementing security within their daily work.

To make it easier to use Sigstore’s toolkit to its full potential, OpenSSF and Linux Foundation Training & Certification are releasing a free online training course, Securing Your Software Supply Chain with Sigstore (LFS182x). This course is designed with end users of Sigstore tooling in mind: software developers, DevOps engineers, security engineers, software maintainers, and related roles. To make the best use of this course, you will need to be familiar with Linux terminals and using command line tools. You will also need to have intermediate knowledge of cloud computing and DevOps concepts, such as using and building containers and CI/CD systems like GitHub Actions, many of which can be learned through other free Linux Foundation Training & Certification courses.

Upon completing this course, participants will be able to inform their organization’s security strategy and build software more securely by default. The hope is this will help you address attacks and vulnerabilities that can emerge at any step of the software supply chain, from writing to packaging and distributing software to end users.

Enroll today and improve your organization’s software development cybersecurity best practices.





The tenth annual Open Source Jobs Report from the Linux Foundation and edX was released today, examining trends in open source hiring, retention, and training

SAN FRANCISCO – June 22, 2022 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and edX, a leading global online learning platform from 2U, Inc. (Nasdaq: TWOU), have released the 10th Annual Open Source Jobs Report, examining the demand for open source talent and trends among open source professionals.

The need for open source talent is strong in light of continuing cloud adoption and digital transformation across industries. As the COVID pandemic wanes, both retention and recruitment have become more difficult than ever, with 73% of professionals reporting it would be easy to find a new role and 93% of employers struggling to find enough skilled talent. Although the majority of open source professionals (63%) reported their employment did not change in the past year, one-in-three did report they either left or changed jobs, which puts additional pressure on employers trying to hold onto staff with necessary skills. While this may not reach levels of a “Great Resignation”, this turnover is putting more pressure on companies.

“Every business has struggled with recruiting and retaining talent this past year, and the open source industry has been no different,” said Linux Foundation Executive Director Jim Zemlin. “Organizations that want to ensure they have the talent to meet their business goals need to not only differentiate themselves to attract that talent, but also look at ways to close the skills gap by developing net new and existing talent. This report provides insights and actionable steps they can take to make that happen.”

“This year’s report found that certifications have become increasingly important as organizations continue to look for ways to close skills gaps. We see modular, stackable learning as the future of education and it’s promising to see employers continuing to recognize these alternative paths to gain the skills needed for today’s jobs,” said Anant Agarwal, edX Founder and 2U Chief Open Education Officer.

The tenth annual Open Source Jobs Report examines trends in open source careers, which skills are most in-demand, the motivation for open source professionals, and how employers attract and retain qualified talent. Key findings from the Open Source Jobs Report include: 

  • There remains a shortage of qualified open source talent: The vast majority of employers (93%) report difficulty finding sufficient talent with open source skills. This trend is not going away with nearly half (46%) of employers planning to increase their open source hiring in the next six months, and 73% of open source professionals stating it would be easy to find a new role should they choose to move on.
  • Compensation has become a greater differentiating factor: Financial incentives including salary and bonuses are the most common means of keeping talent, with two-in-three open source professionals saying a higher salary would deter them from leaving a job. With flex time and remote work becoming the industry standard, lifestyle benefits are becoming less of a consideration, making financial incentives a bigger differentiator.
  • Certifications hit new levels of importance: An overwhelming number of employers (90%) stated that they will pay for employees to obtain certifications, and 81% of professionals plan to add certifications this year, demonstrating the weight these credentials hold. The 69% of employers who are more likely to hire an open source professional with a certification also reinforces that in light of talent shortages, prior experience is becoming less of a requirement as long as someone can demonstrate they possess the skills to do the job.
  • Cloud’s continued dominance: Cloud and container technology skills remain the most in demand this year, with 69% of employers seeking hires with these skills, and 71% of open source professionals agreeing these skills are in high demand. This is unsurprising with 77% of companies surveyed reporting they grew their use of cloud in the past year. Linux skills remain in high demand as well (61% of hiring managers) which is unsurprising considering how much Linux underpins cloud computing.
  • Cybersecurity concerns are mounting: Cybersecurity skills have the fourth biggest impact on hiring decisions, reported by 40% of employers, trailing only cloud, Linux and DevOps. Amongst professionals, 77% state they would benefit from additional cybersecurity training, demonstrating that although the importance of security is being recognized more, there is work to be done to truly secure technology deployments.
  • Companies are willing to spend more to avoid delaying projects: The most common way to close skills gaps currently according to hiring managers is training (43%), followed by 41% who say they hire consultants to fill these gaps, an expensive alternative and an increase from the 37% reporting this last year. This aligns with the only 16% who are willing to delay projects, demonstrating digital transformation activities are being prioritized even if they require costly consultants.

This year’s report is based on survey responses from 1,672 open source professionals and 559 respondents with responsibility for hiring open source professionals. Surveys were fielded online during the month of March 2022.

The full 10th Annual Open Source Jobs Report is available to download here for free.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 1,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

# # #

Media Contact:
Dan Brown
The Linux Foundation
415-420-7880
dbrown@linuxfoundation.org





In recent years, DevOps, which aligns incentives and the flow of work across the organization, has become the standard way of building software. By focusing on improving the flow of value, the software development lifecycle has become much more efficient and effective, leading to positive outcomes for everyone involved. However, software development and IT operations aren’t the only teams involved in the software delivery process. With increasing cybersecurity threats, it has never been more important to unify cybersecurity and other stakeholders into an effective and united value stream aligned towards continuous delivery.

At the most basic level, there is nothing separating DevSecOps from the DevOps model. However, security, and a culture designed to put security at the forefront, has often been an afterthought for many organizations. But in a modern world, as costs and concerns mount from increased security attacks, it must become more prominent. It is possible to provide continuous delivery in a secure fashion; in fact, CD enhances the security profile. Getting there takes a dedication to people, culture, process, and lastly technology, breaking down silos and unifying multi-disciplinary skill sets. Organizations can optimize and align their value streams towards continuous improvement across the entire organization. 

To help educate and inform program managers and software leaders on secure and continuous software delivery, the Linux Foundation is releasing a new, free online training course, Introduction to DevSecOps for Managers (LFS180x) on the edX platform. Pre-enrollment is now open, though the course material will not be available to learners until July 20. The course focuses on providing managers and leaders with an introduction to the foundational knowledge required to lead digital organizations through their DevSecOps journey and transformation.

LFS180x starts off by discussing what DevSecOps is and why it is important. It then provides an overview of DevSecOps technologies and principles using a simple-to-follow “Tech like I’m 10” approach. Next, the course covers topics such as value stream management, platform as product, and engineering organization improvement, all driving towards defining Continuous Delivery and explaining why it is so foundational for any organization. The course also focuses on culture, metrics, cybersecurity, and agile contracting. Upon completion, participants will understand the fundamentals required in order to successfully transform any software development organization into a digital leader.

The course was developed by Dr. Rob Slaughter and Bryan Finster. Rob is an Air Force veteran and the CEO of Defense Unicorns, a company focused on secure air gap software delivery. He is the former co-founder and Director of the Department of Defense’s DevSecOps platform team, Platform One, co-founder of the United States Space Force Space CAMP software factory, and a current member of the Navy software factory Project Blue. Bryan is a software engineer and value stream architect with over 25 years of experience as a software engineer and leading development teams delivering highly available systems for large enterprises. He founded and led the Walmart DevOps Dojo, which focused on a hands-on, immersive learning approach to helping teams solve the problem of “why can’t we safely deliver today’s changes to production today?” He is the co-author of “Modern Cybersecurity: Tales from the Near-Distant Future”, the author of the “5 Minute DevOps” blog, and one of the maintainers of MinimumCD.org. He is currently a value stream architect at Defense Unicorns at Platform One. 

Enroll today to start your journey to mastering DevSecOps practices on July 20!


