Hosted by Kara Swisher and Professor Scott Galloway

Every Tuesday and Friday, tech journalist Kara Swisher and NYU Professor Scott Galloway offer sharp, unfiltered insights into the biggest stories in tech, business, and politics. They make bold predictions, pick winners and losers, and bicker and banter like no one else. After all, with great power comes great scrutiny. From New York Magazine and the Vox Media Podcast Network.

“As a rule, I don’t listen to tech podcasts much at all, since I write about tech almost all day. I check out podcasts about theater or culture — about as far away from my day job as I can get. However, I follow a ‘man-about-town’ guy named George Hahn on social media, who’s a lot of fun. Last year, he mentioned he’d be a guest host of the ‘Pivot’ podcast with Kara Swisher and Scott Galloway, so I checked out Pivot. It’s about tech but it’s also about culture, politics, business, you name it. So that’s become the podcast I dip into when I want to hear a bit about tech, but in a cocktail-party/talk show kind of way.” – Christine Kent, Communications Strategist, Christine Kent Communications





  • Industry experts will share their knowledge across 5G, factory floor, agriculture, government, Smart Home, and Robotics use cases
  • Speakers from 50+ companies, 20 end users, and 16 countries at ONE Summit 
  • Industry experts across the expanding open networking and edge ecosystems confirmed to present insights during ONE Summit North America, November 15-16, in Seattle, WA

SAN FRANCISCO, August 31, 2022 – LF Networking, the facilitator of collaboration and operational excellence across open source networking projects, announced that the ONE Summit North America 2022 session schedule is now available. Taking place November 15-16 in Seattle, WA, ONE Summit is the one industry event that brings together decision makers and implementers for two days of in-depth presentations and interactive conversations around 5G, Access, Edge, Telco, Cloud, Enterprise Networking, and more open source technology developments. 

“LF Networking is proud to set a high bar with the quality of content submissions for this year’s ONE Summit, and to offer an innovative line-up of diverse sessions,” said Arpit Joshipura, General Manager, Networking, Edge, and IoT, the Linux Foundation. “We will also touch on gaming, robotics, 5G network automation, factory floor, agriculture and more, with a strong program based on the power of connectivity.” 

The event will feature an extensive program of 70+ diverse business and technical sessions that cover cutting-edge topics across five presentation tracks: Industry 4.0; Security; The New Networking Stack; Operational Deployments (case studies, success & challenges); and Emerging Technologies and Business Models. 

Conference Session Highlights:

ONE Summit returns in person for the first time in two years in its best format ever! The use-case driven content is strong in breadth and depth and includes sessions from open source users with whom LF Networking is engaged for the first time. Attendees will have a choose-your-own-adventure experience as they select from a variety of content formats, from interactive sessions, panels, and in-depth tutorials to lightning talks offering quick glimpses of forward-looking ideas. 

  • Real-world deployment stories of open source in action, from:
    • leading telco and enterprise organizations including TELUS, Google, Deutsche Telekom, Red Hat, Verizon, Nokia, China Mobile, Equinix, Netgate, Pantheon, and others. 
    • government and academic institutions including DARPA, the Naval Information Warfare Center (NIWC), the UK Government, the University of Southern California, Jeju National University, Georgia Tech, and others. 
  • Use case examples across the Metaverse, Robotics, Smart Home, Digital Twins, 5G Automation, Edge Orchestration, AI/ML, Kubernetes Orchestration, and more. 
  • Hands-on experiential learning and technical deep-dives in IoT and edge deployments led by expert practitioners.
  • Lightning talks offer the opportunity to quickly learn about security and emerging technologies.
  • Sessions contributing insight into open source projects across the ecosystem, including Akraino, CAMARA, eBPF, EdgeX Foundry, EVE, Nephio, OAI, OIF, ONAP, OpenSSF, ORAN-SC, SONiC, and more.

Registration

ONE Summit attendees engage directly with thought leaders across 5G, Cloud Native and Network Edge and expand knowledge of open source networking technology progression. Register today to gain fresh insights on technical and business collaboration shaping the future of networking, edge, and cloud computing.

Corporate registration is offered at the early price of US$995 through Sept. 9. Day passes are available for US$675, and Individual/Hobbyist (US$350) and Academic/Student (US$100) passes are also available. Members of The Linux Foundation, LF Networking, and LF Edge receive a 20 percent discount on registration and can contact events@linuxfoundation.org to request a member discount code. Members of the press who would like to request a press pass to attend should contact pr@lfnetworking.org.

To register, visit https://events.linuxfoundation.org/one-summit-north-america/register/. Corporate attendees should register before September 9, 2022 for the best rates. 

Developer & Testing Forum

ONE Summit will be followed by a complimentary, two-day LF Networking Developer and Testing Forum (DTF), a grassroots hands-on event organized by the LF Networking projects. ONE Summit attendees are encouraged to extend the experience, roll up sleeves, and join the incredible developer community to advance the open source networking and automation technologies of the future. Session videos from the Spring 2022 LFN Developer & Testing Forum, which took place June 13-16 in Porto, Portugal, are available here.

Sponsors

ONE Summit is made possible thanks to generous sponsors, including: Diamond sponsor Dell Technologies; Gold sponsor Kyndryl; Silver sponsor Futurewei Technologies; and Bronze sponsors Data Bank and Netris.ai. 

For information on becoming an event sponsor, click here or email the team for more information.

About the Linux Foundation

Founded in 2000, the Linux Foundation is supported by more than 2,000 members and is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. Linux Foundation’s projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, and more. Learn more at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. ###





The role of software, specifically open source software, is more influential than ever and drives today’s innovation. Maintaining and growing future innovation depends on the open source community. Enterprises that understand this are driving transformation and rising to the challenges by boosting their collaboration across industries, understanding how to support their open source developers, and contributing to the open source community.

They realize that success depends on a cohesive, dedicated, and passionate open source community of hundreds to thousands of individuals, whose collaboration is key to achieving the project’s goals. It can be challenging to manage all aspects of an open source project given all the different parts that drive it. For example:

  • Project’s scope and goals
  • Participating members, maintainers, and collaborators
  • Management and governance
  • Process guidelines and procedures
  • IT services 
  • Source control, CI/CD, distribution, and cloud providers
  • Communication channels and social media

The Linux Foundation’s LFX provides various tools to help open source communities design and adopt a successful project strategy considering all moving parts. So how do they do it? Let’s explore that using the Hyperledger project as an example. 

1. Understand your project’s participation

Through the LFX Individual Dashboard, participants can register the identity they use to contribute code to GitHub and Gerrit (the Hyperledger project uses both). The tool then uses that identity to connect users’ contributions, affiliations, memberships, training, certifications, earned badges, and general information. 

With this information, other LFX tools gather and propagate data charts to help the community visualize their participation in GitHub and Gerrit for the different Hyperledger repositories. It also displays detailed contribution metrics, code participation, and issue participation.  

LFX dashboard


The LFX Organization Dashboard is a convenient tool to help managers and organizations manage their project memberships, discover similar projects to join, and understand the team’s engagement in the community. In detail, it provides information on:

  • Code contributions
  • Committee members
  • Event speakers and attendees 
  • Training and certification
  • Project enrollments

LFX dashboard

It is fundamental to keep the project’s members and participant identities organized in order to better understand how their work makes a difference in the project and how their participation interacts with others toward the project’s goals.  

2. Manage your project’s processes

LFX Project Control Center offers a one-stop portal for program managers to organize their project participation, IT services, and quick access to other LFX tools.

Project managers can also connect:

  • Their project’s source control
  • Issue tracking tool
  • Distribution service
  • Cloud provider
  • Mail lists
  • Meeting management
  • Wiki and hosted domains 

For example, Hyperledger can view all related organizations under their Hyperledger Foundation umbrella, analyze each participant project, and connect services like GitHub, Jira, Confluence, and their communication channels like Groups.io and Twitter accounts.

LFX dashboard

Managing all the project’s aspects in one place makes it easier for managers to visualize their project scope and better understand how all their services impact the project’s performance.

LFX dashboard

3. Reach outside and get your project in the spotlight

Social and earned media are fundamental to ensure your project reaches the ears of its consumers. In addition, it is essential to have good visibility into your project’s influence in the Open Source world and where it is making the best impact.

LFX’s Insights Social Media Metrics provides high-level metrics on a project’s social media account like:

  • Twitter followers and following information 
  • Tweets and retweet breakdown
  • Trending tweets
  • Hashtag breakdown 
  • Contributor and user mentions

In the case of Hyperledger, we have an overall view of their tweet and retweet breakdown. In addition, we can also see how tweets by Bitcoin News are making an impression on the interested communities. 

LFX dashboard

Insights helps you analyze how your project impacts other regions and reaches diverse audiences by language, so you can adjust communication and marketing strategies for the channels open source participants rely on for the latest information on how the community contributes and engages. For example, tweets written in English, Japanese, and Spanish by Hyperledger contributors are visible in an overall languages chart, with direct and indirect impressions calculated.

LFX dashboard

The bottom line

A coherent open source project strategy is a crucial driver of how enterprises manage their open source programs across their organization and industry. LFX is one of the tools that make enterprise open source programs successful. It is an exclusive benefit for Linux Foundation members and projects. If your organization and project would like to join us, learn more about membership or hosting your project.





The original article appeared on the OpenSSF blog. The author, Harimohan Rajamohanan, is a Solution Architect and Full Stack Developer with Wipro Limited. Learn more about the Linux Foundation’s Developing Secure Software (LFD121) course.

All software is under continuous attack today, so software architects and developers should focus on practical steps to improve information security. There are plenty of materials available online that talk about various aspects of secure development practices, but they are scattered across articles and books. Recently, I came across a course developed by the Open Source Security Foundation (OpenSSF), a part of the Linux Foundation, that is geared toward software developers, DevOps professionals, web application developers, and others interested in learning the best practices of secure software development. My experience taking the DEVELOPING SECURE SOFTWARE (LFD121) course was positive, and I immediately started applying these learnings in my work as a software architect and developer.

“A useful trick for creating secure systems is to think like an attacker before you write the code or make a change to the code” – DEVELOPING SECURE SOFTWARE (LFD121)

My earlier understanding about software security was primarily focused on the authentication and the authorization of users. In this context the secure coding practices I was following were limited to:

  • No unauthorized read
  • No unauthorized modification
  • Ability to prove someone did something
  • Auditing and logging

It is not enough to assume software is secure simply because strong authentication and authorization mechanisms are present. Almost all application development today depends on open source software, so it is important that developers verify the security of the open source chain of contributors and its dependencies. Recent vulnerability disclosures and supply chain attacks were an eye-opener for me about the potential for vulnerabilities in open source software. The natural focus of the majority of developers is to get the business logic working and deliver the code without any functional bugs.

The course gave me a comprehensive outlook on the secure development practices one should follow to defend from the kind of attacks that happen in modern day software.

What does risk management really mean?

The course has detailed practical advice on considering security as part of the requirements of a system. Having been part of various global system integrators for over a decade, I was tasked with developing application software for my customers. The functional requirements were typically written down in such projects but covered only a few aspects of security, namely user authentication and authorization. Documenting the security requirements in detail helps developers and future maintainers of the software understand what the system is trying to accomplish for security.

Key takeaways on risk assessment:

  • Analyze security basics including risk management, the “CIA” triad, and requirements
  • Apply secure design principles such as least privilege, complete mediation, and input validation
  • Supply chain evaluation tips on how to reuse software with security in mind, including selecting, downloading, installing, and updating such software
  • Document the high-level security requirements in one place

Secure design principles while designing a software solution

Design principles are guides based on experience and practice. Software will generally be more secure if you apply secure design principles. This course covers a broad spectrum of design principles in terms of the components you trust and the components you do not trust. The key principles I learned from the course that guide my present-day software design are:

  • The user and program should operate using the least privilege. This limits the damage from error or attack.
  • Every data access or manipulation attempt should be verified and authorized using a mechanism that cannot be bypassed.
  • Access to systems should be based on more than one condition. How do you prove the identity of the authenticated user is who they claimed to be? Software should support two-factor authentication.
  • The user interface should be designed for ease of use to make sure users routinely and automatically use the protection mechanisms correctly.
  • Importance of understanding what kind of attackers you expect to counter.
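
The "cannot be bypassed" point above can be sketched in code. The following is a minimal, hypothetical illustration (the permission table, decorator, and function names are all invented for this example) of routing every data-modification call through a single authorization check:

```python
from functools import wraps

# Hypothetical in-memory permission table; a real system would query
# an access-control service instead.
PERMISSIONS = {"alice": {"read"}, "bob": {"read", "write"}}

def requires(permission):
    """Mediate every call: callers cannot reach the wrapped function
    without passing this check."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if permission not in PERMISSIONS.get(user, set()):
                raise PermissionError(f"{user} lacks '{permission}'")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("write")
def update_record(user, record_id, value):
    # Reached only after the mediation check has authorized the caller.
    return {"id": record_id, "value": value, "updated_by": user}

print(update_record("bob", 7, "ok"))  # bob holds the 'write' privilege
```

Centralizing the check in one decorator also makes it auditable: there is a single place to log every access attempt.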

A few examples on how I applied the secure design principles in my solution designs:

  • The solutions I build often use a database. I have used the SQL GRANT command to limit the privilege the program gets. In particular, the DELETE privilege is not given to any program. And I have implemented a soft delete mechanism in the program that sets the column “active = false” in the table for delete use cases.
  • The recent software designs I have been doing are based on microservice architecture where there is a clear separation between the GUI and backend services. Each part of the overall solution is authenticated separately. This may minimize the attack surface.
  • Client-side input validation is limited to countering accidental mistakes. The actual input validation happens on the server side. The API endpoints validate all inputs thoroughly before processing them. For instance, a PUT API not only validates the resource modification inputs, but also makes sure that the resource is present in the database before proceeding with the update.
  • Updates are allowed only if the user consuming the API is authorized to do it.
  • Databases are not directly accessible for use by a client application.
  • All the secrets like cryptographic keys and passwords are maintained outside the program in a secure vault. This is mainly to avoid secrets in source code going into version control systems.
  • I have started to look for OpenSSF Best Practices Badge while selecting open source software and libraries in my programs. I also look for the security posture of open source software by checking the OpenSSF scorecards score.
  • Another practice I follow while using open source software is to check whether the software is maintained. Are there recent releases or announcements from the community?
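
The soft-delete idea from the first bullet can be sketched as follows. This is a hypothetical illustration (table and column names invented, using SQLite for brevity; the GRANT-based privilege restriction described above would be configured separately in the real database):

```python
import sqlite3

# In-memory database standing in for the real one. The program is assumed
# to hold UPDATE but not DELETE privileges on this table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, active INTEGER DEFAULT 1)"
)
conn.execute("INSERT INTO items (name) VALUES ('widget'), ('gadget')")

def soft_delete(conn, item_id):
    # Flag the row inactive instead of issuing DELETE: the data is
    # preserved for auditing, and the DELETE privilege is never needed.
    conn.execute("UPDATE items SET active = 0 WHERE id = ?", (item_id,))

soft_delete(conn, 1)
active_rows = conn.execute("SELECT name FROM items WHERE active = 1").fetchall()
print(active_rows)  # only rows with active = 1 are visible to normal queries
```

Normal queries then filter on `active = 1`, so "deleted" rows disappear from the application while remaining recoverable.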

Secure coding practices

In my opinion, this course covers almost all aspects of secure coding practices that a developer should focus on. The key focus areas include:

  1. Input validation
  2. How to validate numbers
  3. Key issues with text, including Unicode and locales
  4. Usage of regular expressions to validate text input
  5. Importance of minimizing attack surfaces
  6. Secure defaults and secure startup

For example, apply API input validation on IDs to make sure that records belonging to those IDs exist in the database. This reduces the attack surface. Likewise, make sure that the object named in a modification request exists in the database before proceeding.
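
A minimal sketch of that server-side validation pattern (function and field names are hypothetical, with an in-memory dict standing in for the database):

```python
# Hypothetical PUT-style update handler. The "database" is a dict keyed
# by resource ID; a real service would query its datastore instead.
db = {1: {"name": "alpha"}, 2: {"name": "beta"}}

def update_resource(resource_id, payload):
    # Reject unknown IDs before any other processing: the modify path
    # is only reachable for records that actually exist.
    if resource_id not in db:
        raise KeyError(f"resource {resource_id} not found")
    # Validate each field of the modification request, not just its shape.
    name = payload.get("name")
    if not isinstance(name, str) or not name:
        raise ValueError("'name' must be a non-empty string")
    db[resource_id]["name"] = name
    return db[resource_id]

print(update_resource(1, {"name": "alpha-v2"}))  # ID exists, input valid
```

Rejecting unknown IDs and malformed fields up front keeps invalid requests away from the business logic entirely.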

  • Process data securely
  • Importance of treating untrusted data as dangerous
  • Avoid default and hardcoded credentials
  • Understand the memory safety problems such as out-of-bounds reads or writes, double-free, and use-after-free
  • Avoid undefined behavior
  • Call out to other programs
  • Securely call other programs
  • How to counter injection attacks such as SQL injection and OS command injection
  • Securely handle file names and file paths
  • Send output
  • Securely send output
  • How to counter cross-site scripting (XSS) attacks
  • Use HTTP hardening headers, including Content Security Policy (CSP)
  • Prevent common output-related vulnerabilities in web applications
  • How to securely format strings and templates

Conclusion

“Security is a process – a journey – and not a simple endpoint” – DEVELOPING SECURE SOFTWARE (LFD121)

This course gives practical guidance on developing secure software: considering security requirements, applying secure design principles, countering common implementation mistakes, using tools to detect problems before you ship the code, and promptly handling vulnerability reports. I strongly recommend this course and the certification to all developers out there.

About the author

Harimohan Rajamohanan is a Solution Architect and Full Stack Developer, Open Source Program Office, Lab45, Wipro Limited. He is an open source software enthusiast and has worked in areas such as application modernization, digital transformation, and cloud native computing. His major focus areas are software supply chain security and observability.






Boeing to lead New Aerospace Working Group

SAN FRANCISCO – August 11, 2022 –  Today, the ELISA (Enabling Linux in Safety Applications) Project announced that Boeing has joined as a Premier member, marking its commitment to Linux and its effective use in safety critical applications. Hosted by the Linux Foundation, ELISA is an open source initiative that aims to create a shared set of tools and processes to help companies build and certify Linux-based safety-critical applications and systems.

“Boeing is modernizing software to accelerate innovation and provide greater value to our customers,” said Jinnah Hosein, Vice President of Software Engineering at the Boeing Company. “The demand for safe and secure software requires rapid iteration, integration, and validation. Standardizing around open source products enhanced for safety-critical avionics applications is a key aspect of our adoption of state-of-the-art techniques and processes.”

As a leading global aerospace company, Boeing develops, manufactures, and services commercial airplanes, defense products, and space systems for customers in more than 150 countries. It already uses Linux in current avionics systems, including commercial systems certified to DO-178C Design Assurance Level D. Joining the ELISA Project will help pursue the vision for generational change in software development at Boeing. Additionally, Boeing will work with the ELISA Technical Steering Committee (TSC) to launch a new Aerospace Working Group that will work in parallel with the other working groups, such as automotive and medical devices.

“We want to improve industry-standard tools related to certification and assurance artifacts in order to standardize improvements and contribute new features back to the open source community. We hope to leverage open source tooling (such as a cloud-based DevSecOps software factory) and industry standards to build world class software and provide an environment that attracts industry leaders to drive cultural change at Boeing,” said Hosein.

Linux is used in all major industries because it can enable faster time to market for new features while taking advantage of mature code development processes. Launched in February 2019, ELISA works with Linux kernel and safety communities to agree on what should be considered when Linux is used in safety-critical systems. The project has several dedicated working groups that focus on providing resources that system integrators can apply to analyze their systems qualitatively and quantitatively.

“Linux has a history of being a reliable and stable development platform that advances innovation for a wide range of industries,” said Kate Stewart, Vice President of Dependable Embedded Systems at the Linux Foundation. “With Boeing’s membership, ELISA will start a new focus in the aerospace industry, which is already using Linux in selected applications. We look forward to working with Boeing and others in the aerospace sector, to build up best practices for working with Linux in this space.”

Other ELISA Project members include ADIT, AISIN AW CO., Arm, Automotive Grade Linux, Automotive Intelligence and Control of China, Banma, BMW Car IT GmbH, Codethink, Elektrobit, Horizon Robotics, Huawei Technologies, Intel, Lotus Cars, Toyota, Kuka, Linutronix, Mentor, NVIDIA, SUSE, Suzuki, Wind River, OTH Regensburg, and ZTE.

Upcoming ELISA Events

The ELISA Project has several upcoming events for the community to learn more or to get involved including:

  • ELISA Summit – Hosted virtually for participants around the world on September 7-8, this event will feature an overview of the project, the mission and goals of each working group, and an opportunity for attendees to ask questions and network with ELISA leaders. The schedule is now live and includes speakers from Aptiv Services Deutschland GmbH, Boeing, Codethink, The Linux Foundation, Mobileye, Red Hat, and Robert Bosch GmbH. Check out the schedule here: https://events.linuxfoundation.org/elisa-summit/program/schedule/. Registration is free and open to the public. https://elisa.tech/event/elisa-summit-virtual/
  • ELISA Forum – Hosted in-person in Dublin, Ireland, on September 12, this event takes place the day before Open Source Summit Europe begins. It will feature an update on all of the working groups, an interactive System-Theoretic Process Analysis (STPA) use case and an Ask Me Anything session.  Pre-registration is required. To register for ELISA Forum, add it to your Open Source Summit Europe registration.
  • Open Source Summit Europe – Hosted in-person in Dublin and virtually on September 13-16, ELISA will have two dedicated presentations about enabling safety in safety-critical applications and safety and open source software. Learn more.

For more information about ELISA, visit https://elisa.tech/.

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

###





This post is authored by Hayden Blauzvern and originally appeared on Sigstore’s blog. Sigstore is a new standard for signing, verifying, and protecting software. It is a project of the Linux Foundation. 

Developers, package maintainers, and enterprises that would like to adopt Sigstore may already sign published artifacts. Signers may have existing procedures to securely store and use signing keys. Sigstore can be used to sign artifacts with existing self-managed, long-lived signing keys. Sigstore provides a simple user experience for signing, verification, and generating structured signature metadata for artifacts and container signatures. Sigstore also offers a community-operated, free-to-use transparency log for auditing signature generation.

Sigstore additionally has the ability to use code signing certificates with short-lived signing keys bound to OpenID Connect identities. This signing approach offers simplicity due to the lack of key management; however, this may be too drastic of a change for enterprises that have existing infrastructure for signing. This blog post outlines strategies to ease adoption of Sigstore while still using existing signing approaches.

Signing with self-managed, long-lived keys

Developers that maintain their own signing keys but want to migrate to Sigstore can first switch to using Cosign to generate a signature over an artifact. Cosign supports importing an existing RSA, ECDSA, or ED25519 PEM-encoded PKCS#1 or PKCS#8 key with cosign import-key-pair --key key.pem, and can sign and verify with cosign sign-blob --key cosign.key artifact-path and cosign verify-blob --key cosign.pub artifact-path.

Benefits

  • Developers can get accustomed to Sigstore tooling to sign and verify artifacts.
  • Sigstore tooling can be integrated into CI/CD pipelines.
  • For signing containers, signature metadata is published with the OCI image in an OCI registry.

Signing with self-managed keys with auditability

While maintaining their own signing keys, developers can increase the auditability of signing events by publishing signatures to the Sigstore transparency log, Rekor. This allows developers to audit when signatures are generated for artifacts they maintain, and also to monitor when their signing key is used to create a signature.

Developers can upload a signature to the transparency log during signing with COSIGN_EXPERIMENTAL=1 cosign sign-blob --key cosign.key artifact-path. If developers would like to use their own signing infrastructure while still publishing to a transparency log, they can use the Rekor CLI or API. To upload an artifact and cryptographically verify its inclusion in the log using the Rekor CLI:

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

rekor-cli verify --rekor_server https://rekor.sigstore.dev \
  --signature <artifact_signature> \
  --public-key <your_public_key> \
  --artifact <url_to_artifact|local_path>

In addition to PEM-encoded certificates and public keys, Sigstore supports uploading many different key formats, including PGP, Minisign, SSH, PKCS#7, and TUF. When uploading using the Rekor CLI, specify the --pki-format flag. For example, to upload an artifact signed with a PGP key:

gpg --armor -u user@example.com --output signature.asc --detach-sig package.tar.gz

gpg --export --armor "user@example.com" > public.key

rekor-cli upload --rekor_server https://rekor.sigstore.dev \
  --signature signature.asc \
  --public-key public.key \
  --pki-format=pgp \
  --artifact package.tar.gz

Benefits

  • Developers begin to publish signing events for auditability.
  • Artifact consumers can create a verification policy that requires a signature be published to a transparency log.

Self-managed keys in identity-based code signing certificate with auditability

When requesting a code signing certificate from the Sigstore certificate authority Fulcio, Fulcio binds an OpenID Connect identity to a key, allowing for a verification policy based on identity rather than a key. Developers can request a code signing certificate from Fulcio with a self-managed long-lived key, sign an artifact with Cosign, and upload the artifact signature to the transparency log.

However, artifact consumers can still fail-open with verification (allow the artifact, while logging the failure) if they do not want to take a hard dependency on Sigstore (require that Sigstore services be used for signature generation). A developer can use their self-managed key to generate a signature. A verifier can simply extract the verification key from the certificate without verification of the certificate’s signature. (Note that verification can occur offline, since inclusion in a transparency log can be verified using a persisted signed bundle from Rekor and code signing certificates can be verified with the CA root certificate. See Cosign’s verification code for an example of verifying the Rekor bundle.)

Once a consumer takes a hard dependency on Sigstore, a CI/CD pipeline can move to fail-closed (forbid the artifact if verification fails).

Benefits

  • A stronger verification policy that enforces both the presence of the signature in a transparency log and the identity of the signer.
  • Verification policies can be enforced fail-closed.

Identity-based (“keyless”) signing

This final step is added for completeness. Signing is done using code signing certificates, and signatures must be published to a transparency log for verification. With identity-based signing, fail-closed is the only option, since Sigstore services must be online to retrieve code signing certificates and append entries to the transparency log. Developers will no longer need to maintain signing keys.

Conclusion

The Sigstore tooling and infrastructure can be used as a whole or modularly. Each separate integration can help to improve the security of artifact distribution while allowing for incremental updates and verifying each step of the integration.



Source link


We are happy to announce the release of Delta Lake 2.0 (pypi, maven, release notes) on Apache Spark™ 3.2, with features including, but not limited to, those described below.

The significance of Delta Lake 2.0 is not just a number – though it is timed quite nicely with Delta Lake’s 3rd birthday. It reiterates our collective commitment to the open-sourcing of Delta Lake, as announced by Michael Armbrust’s Day 1 keynote at Data + AI Summit 2022.

What’s new in Delta Lake 2.0?

There have been a lot of new features released in the last year between Delta Lake 1.0, 1.2, and now 2.0. This blog will review a few of these specific features that are going to have a large impact on your workload.

Delta 1.2 vs Delta 2.0 chart

Improving data skipping

When exploring or slicing data using dashboards, data practitioners will often run queries with a specific filter in place. As a result, the matching data is often buried in a large table, requiring Delta Lake to read a significant amount of data. With data skipping via column statistics and Z-Order, the data can be clustered by the most common filters used in queries — sorting the table to skip irrelevant data, which can dramatically increase query performance.

Support for data skipping via column statistics

When querying any table from HDFS or cloud object storage, by default, your query engine will scan all of the files that make up your table. This can be inefficient, especially if you only need a smaller subset of data. To improve this process, as part of the Delta Lake 1.2 release, we included support for data skipping by utilizing the Delta table’s column statistics.

For example, when running the following query, you do not want to unnecessarily read files outside of the year or uid ranges.

Example query: a SELECT * from the events table with filters on year and uid.

When Delta Lake writes a table, it will automatically collect the minimum and maximum values and store this directly into the Delta log (i.e. column statistics). Therefore, when a query engine reads the transaction log, those read queries can skip files outside the range of the min/max values as visualized below.

Visualization: read queries skip files whose min/max ranges fall outside the query’s filters.

This approach is more efficient than row-group filtering within the Parquet file itself, as you do not need to read the Parquet footer. For more information on row-group filtering, please refer to How Apache Spark™ performs a fast count using the parquet metadata; for more on data skipping, please refer to the data skipping documentation.
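The file-pruning idea described above can be sketched in a few lines. This is a conceptual illustration, not Delta Lake’s implementation: the per-file min/max statistics are modeled as plain dictionaries with illustrative field names.

```python
# Minimal sketch of data skipping with per-file column statistics. The
# "transaction log" below is an illustrative list of per-file min/max stats,
# not Delta Lake's actual log format.

def files_to_read(file_stats, column, lo, hi):
    """Keep only files whose [min, max] range overlaps the query range."""
    return [
        f["path"]
        for f in file_stats
        if not (f["max"][column] < lo or f["min"][column] > hi)
    ]

log = [
    {"path": "part-0.parquet", "min": {"year": 2018}, "max": {"year": 2019}},
    {"path": "part-1.parquet", "min": {"year": 2020}, "max": {"year": 2021}},
]
# A query filtering year BETWEEN 2020 AND 2021 never opens part-0.parquet.
```

The same check generalizes to multiple columns: a file is read only if every filtered column’s range overlaps the query’s predicate.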

Support Z-Order clustering of data to reduce the amount of data read

But data skipping using column statistics is only one part of the solution. To maximize data skipping, what is also needed is the ability to skip with data clustering. As implied previously, data skipping is most effective when files have a very small minimum/maximum range. While sorting the data can help, this is most effective when applied to a single column.

OPTIMIZE deltaTable ZORDER BY (x, y)

Regular sorting of data by primary and secondary columns (left) and 2-dimensional Z-order data clustering for two columns (right).

But with Z-order, its space-filling curve provides better multi-column data clustering. This data clustering allows column stats to be more effective in skipping data based on filters in a query. See the documentation and this blog for more details.
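To make the space-filling-curve idea concrete, here is a minimal two-column Morton (Z-order) key built by interleaving bits. It mirrors the idea behind `OPTIMIZE ... ZORDER BY (x, y)`; Delta Lake’s actual implementation differs in detail.

```python
# Illustrative Z-order (Morton) key for two integer columns: interleaving
# the bits of x and y clusters rows so that nearby (x, y) pairs land near
# each other in the sort order, keeping per-file min/max ranges tight.

def z_order_key(x, y, bits=16):
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)      # x bits go to even positions
        key |= ((y >> i) & 1) << (2 * i + 1)  # y bits go to odd positions
    return key

rows = [(0, 3), (3, 0), (1, 1), (2, 2)]
rows.sort(key=lambda r: z_order_key(*r))  # cluster rows along the Z-curve
```

Sorting by this key, rather than by x then y, keeps both columns’ min/max ranges small within each file, so filters on either column can skip files.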

Support Change Data Feed on Delta tables

One of the biggest value propositions of Delta Lake is its ability to maintain data reliability in the face of changing records brought on by data streams. However, identifying which records changed traditionally requires scanning and reading the entire table, creating significant overhead that can slow performance.

With Change Data Feed (CDF), you can now read a Delta table’s change feed at the row level rather than the entire table to capture and manage changes for up-to-date silver and gold tables. This improves your data pipeline performance and simplifies its operations.

To enable CDF, you must explicitly use one of the following methods:

  • New table: Set the table property delta.enableChangeDataFeed = true in the CREATE TABLE command.

    CREATE TABLE student (id INT, name STRING, age INT) TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • Existing table: Set the table property delta.enableChangeDataFeed = true in the ALTER TABLE command.

    ALTER TABLE myDeltaTable SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • All new tables:

    SET spark.databricks.delta.properties.defaults.enableChangeDataFeed = true;

An important thing to remember is once you enable the change data feed option for a table, you can no longer write to the table using Delta Lake 1.2.1 or below. However, you can always read the table. In addition, only changes made after you enable the change data feed are recorded; past changes to a table are not captured.

So when should you enable Change Data Feed? The following use cases should drive that decision.

  • Silver and Gold tables: When you want to improve Delta Lake performance by streaming row-level changes for up-to-date silver and gold tables. This is especially useful following MERGE, UPDATE, or DELETE operations, accelerating and simplifying ETL operations.
  • Transmit changes: Send a change data feed to downstream systems such as Kafka or RDBMS that can use the feed to process later stages of data pipelines incrementally.
  • Audit trail table: Capturing the change data feed as a Delta table provides perpetual storage and efficient query capability to see all changes over time, including when deletes occurred and what updates were made.
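The row-level consumption pattern behind these use cases can be sketched as follows. This is an illustrative model, not Delta Lake code: the downstream table is a plain dict, though the change-type names do mirror the `_change_type` values CDF emits (insert, update_preimage, update_postimage, delete).

```python
# Sketch of consuming a row-level change feed to keep a downstream
# ("silver") table in sync without rescanning the source table.
# The dict-as-table representation is purely illustrative.

def apply_change_feed(table, changes):
    for row in changes:
        if row["_change_type"] in ("insert", "update_postimage"):
            table[row["id"]] = row["name"]        # upsert the new image
        elif row["_change_type"] == "delete":
            table.pop(row["id"], None)            # drop the deleted row
        # update_preimage rows carry the old values; ignored for an upsert
    return table

silver = apply_change_feed({}, [
    {"_change_type": "insert", "id": 1, "name": "a"},
    {"_change_type": "update_postimage", "id": 1, "name": "b"},
])
```

Only the changed rows are processed, which is why CDF avoids the full-table scans described above.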

Support for dropping columns

For versions of Delta Lake prior to 1.2, there was a requirement for Parquet files to store data with the same column name as the table schema. Delta Lake 1.2 introduced a mapping between logical column names and the physical column names in those Parquet files. While the physical names remain unique, renaming a logical column becomes a simple change in the mapping, and logical column names can have arbitrary characters while the physical name remains Parquet-compliant.

Before column mapping and with column mapping

As part of the Delta Lake 2.0 release, we leveraged column mapping so that dropping a column is a metadata operation. Therefore, instead of physically modifying all of the files of the underlying table to drop a column, this can be a simple modification to the Delta transaction log (i.e. a metadata operation) to reflect the column removal. Run the following SQL command to drop a column:

ALTER TABLE myDeltaTable DROP COLUMN myColumn

See documentation for more details.
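Why this is metadata-only can be sketched with a toy mapping. This is purely illustrative (the field names are not Delta’s actual log schema): dropping a column removes a mapping entry, while the Parquet files and their physical columns are left untouched.

```python
# Sketch of column mapping: the schema maps logical names to stable
# physical names in Parquet, so a DROP COLUMN just removes a mapping
# entry; no data files are rewritten. (Names are illustrative.)

mapping = {"id": "col-1", "name": "col-2", "myColumn": "col-3"}

def drop_column(mapping, logical_name):
    new_mapping = dict(mapping)
    del new_mapping[logical_name]  # the Parquet file still holds col-3,
    return new_mapping             # but readers no longer project it

after = drop_column(mapping, "myColumn")
```

Readers project only the columns present in the mapping, so the dropped column simply disappears from query results.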

Support for Dynamic Partition Overwrites

In addition, Delta Lake 2.0 now supports Delta dynamic partition overwrite mode for partitioned tables; that is, overwrite only the partitions with data written into them at runtime.

When in dynamic partition overwrite mode, we overwrite all existing data in each logical partition for which the write will commit new data. Any existing logical partitions for which the write does not contain data will remain unchanged. This mode is only applicable when data is being written in overwrite mode: either INSERT OVERWRITE in SQL, or a DataFrame write with df.write.mode("overwrite"). In SQL, you can run the following commands:

SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;

Note that dynamic partition overwrite conflicts with the option replaceWhere for partitioned tables. For more information, see the documentation.
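The overwrite semantics described above can be sketched with a toy partition map. This is an illustrative model, not Delta Lake code: partitions are dict keys, and only the partitions present in the incoming write are replaced.

```python
# Sketch of dynamic partition overwrite semantics: only logical partitions
# present in the incoming data are replaced; all others are untouched.
# (The dict-of-partitions model is purely illustrative.)

def dynamic_partition_overwrite(table, new_data):
    result = dict(table)
    for partition, rows in new_data.items():
        result[partition] = rows  # replace only partitions being written
    return result

existing = {"2021": ["a"], "2022": ["b"]}
incoming = {"2022": ["c"]}
# "2021" survives untouched; "2022" is fully replaced by the new rows.
```

By contrast, a static overwrite would drop "2021" as well, replacing the entire table with the incoming data.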

Additional Features in Delta Lake 2.0

In the spirit of performance optimizations, Delta Lake 2.0.0 also includes these additional features:

  • Support for idempotent writes to Delta tables to enable fault-tolerant retry of Delta table writing jobs without writing the data multiple times to the table. See the documentation for more details.
  • Support for multi-part checkpoints, which split the Delta Lake checkpoint into multiple parts to speed up writing and reading checkpoints. See documentation for more details.
  • Other notable changes
    • Improve data skipping for generated columns by adding support for skipping by nested columns in generated-column expressions.
    • Improve table schema validation by blocking unsupported data types in Delta Lake.
    • Support creating a Delta Lake table with an empty schema.
    • Change the behavior of DROP CONSTRAINT to throw an error when the constraint does not exist. Before this version, the command would return silently.
    • Fix symlink manifest generation when partition values contain spaces.
    • Fix an issue where incorrect commit stats are collected.
    • More ways to access the Delta table OPTIMIZE file compaction command.

Building a Robust Data Ecosystem

As noted in Michael Armbrust’s Day 1 keynote and our Dive into Delta Lake 2.0 session, a fundamental aspect of Delta Lake is the robustness of its data ecosystem.


As data volume and variety continue to rise, the need to integrate with the most common ingestion engines is critical. For example, we’ve recently announced integrations with Apache Flink, Presto, and Trino — allowing you to read and write to Delta Lake directly from these popular engines. Check out Delta Lake > Integrations for the latest integrations.

Delta's expanding ecosystem of connectors

Delta Lake will be relied on even more to bring reliability and improved performance to data lakes by providing ACID transactions and unifying streaming and batch transactions on top of existing cloud data stores. By building connectors with the most popular compute engines and technologies, the appeal of Delta Lake will continue to increase — driving more growth in the community and rapid adoption of the technology across the most innovative and largest enterprises in the world.

Updates on Community Expansion and Growth

We are proud of the community and the tremendous work over the years to deliver the most reliable, scalable, and performant table storage format for the lakehouse to ensure consistent high-quality data. None of this would be possible without the contributions from the open-source community. In the span of a year, we have seen the number of downloads skyrocket from 685K monthly downloads to over 7M downloads/month. As noted in the following figure, this growth is in no small part due to the quickly expanding Delta ecosystem.

The most widely used lakehouse format in the world

All of this activity and the growth in unique contributions — including commits, PRs, changesets, and bug fixes — has culminated in an increase in contributor strength by 633% during the last three years (Source: The Linux Foundation Insights).

But it is important to remember that we could not have done this without the contributions of the community.

Credits

Saying this, we wanted to provide a quick shout-out to all of those involved with the release of Delta Lake 2.0: Adam Binford, Alkis Evlogimenos, Allison Portis, Ankur Dave, Bingkun Pan, Burak Yilmaz, Chang Yong Lik, Chen Qingzhi, Denny Lee, Eric Chang, Felipe Pessoto, Fred Liu, Fu Chen, Gaurav Rupnar, Grzegorz Kołakowski, Hussein Nagree, Jacek Laskowski, Jackie Zhang, Jiaan Geng, Jintao Shen, Jintian Liang, John O’Dwyer, Junyong Lee, Kam Cheung Ting, Karen Feng, Koert Kuipers, Lars Kroll, Liwen Sun, Lukas Rupprecht, Max Gekk, Michael Mengarelli, Min Yang, Naga Raju Bhanoori, Nick Grigoriev, Nick Karpov, Ole Sasse, Patrick Grandjean, Peng Zhong, Prakhar Jain, Rahul Shivu Mahadev, Rajesh Parangi, Ruslan Dautkhanov, Sabir Akhadov, Scott Sandre, Serge Rielau, Shixiong Zhu, Shoumik Palkar, Tathagata Das, Terry Kim, Tyson Condie, Venki Korukanti, Vini Jaiswal, Wenchen Fan, Xinyi, Yijia Cui, Yousry Mohamed.

We’d also like to thank Nick Karpov and Scott Sandre for their help with this post.

How can you help?

We’re always excited to work with current and new community members. If you’re interested in helping the Delta Lake project, please join our community today through many forums, including GitHub, Slack, Twitter, LinkedIn, YouTube, and Google Groups.

Join the community today



Source link


The following post originally appeared on Medium. The author, Ruchi Pakhle, participated in our LFX Mentorship program this past spring.

Hey everyone!
I am Ruchi Pakhle currently pursuing my Bachelor’s in Computer Engineering from MGM’s College of Engineering & Technology. I am a passionate developer and an open-source enthusiast. I recently graduated from LFX Mentorship Program. In this blog post, I will share my experience of contributing to Open Horizon, a platform for deploying container-based workloads and related machine learning models to compute nodes/clusters on edge.

I have been an active contributor to open-source projects via programs like GirlScript Summer of Code, Script Winter of Code, and so on. Through these programs I contributed to different beginner-level open-source projects, and after almost a year of doing this, I went on to contribute documentation and code to different organizations and projects. One very random morning, applications for LFX opened up and I saw various posts about it on LinkedIn. Among them was a post from my very dear friend Unnati Chhabra, who had just graduated from the program, so I went ahead, checked which organization fit my skill set, and decided to give it a shot.

I was very interested in DevOps and Cloud Native technologies and wanted to get started with them, but I had been procrastinating a lot and did not know how to pave my path ahead. I was constantly looking for opportunities I could get my hands on. As Open Horizon works exactly on DevOps and Cloud Native technologies, I applied straight away to their project, which had two slots open for the spring cohort. I joined their Element channel and started becoming active by contributing to the project, engaging with the community, and reading more about the architecture, trying to understand it well by referring to their YouTube videos. You can contribute to Open Horizon here.

The Linux Foundation opens LFX Mentorship applications thrice a year: a spring, a summer, and a winter cohort, each lasting three months. I applied to the spring cohort, for which applications opened around February 2022, and I submitted my application on 4th February 2022 for the Open Horizon project. I remember three documents being mandatory for submitting the application:

1. Updated Resume/CV

2. Cover Letter

(this is very important for your selection, so cover everything in your cover letter, and maybe add links to your projects, achievements, or anything else you think adds value)

The cover letter should primarily cover these points 👇

  • How did you find out about our mentorship program?
  • Why are you interested in this program?
  • What experience and knowledge/skills do you have that are applicable to this program?
  • What do you hope to get out of this mentorship experience?

3. A permission document from your university stating it has no obligation over the entire span of the mentorship (this varies from org to org and may not always be required)

The LFX acceptance mail was a major achievement for me, as at that time I was constantly getting rejections and had absolutely no idea how things were going to work out. I was constantly doubting myself, so this mail not only boosted my confidence but also gave me a ray of hope that I could achieve things by working hard consistently. A major thanks to my mentors, Joe Pearson and Troy Fine, for believing in me and giving me this opportunity.⭐

From the day I applied to LFX until getting selected as an LFX Mentee and working successfully for over three and a half months, it all felt surreal. I have contributed to open-source projects and organizations before, but being a part of LFX gave me a huge learning curve and a sense of credibility and ownership I wouldn’t have gotten anywhere else.


I still remember setting up the mgmt-hub all-in-one script locally; I thought it would be a cakewalk, but it was not. I literally tried to run the script every single day, but it would end up giving some error; I would google the errors and apply the results, but it would still fail. One thing I did consistently, though, was share my progress regularly with my mentor Troy. No matter how often the script failed, I would communicate that to him and send him logs, and he would suggest probable solutions, but the script would still fail. I then messaged the open-horizon-examples group, and Joe helped with my doubts; a huge thanks to him and Troy for patiently helping me figure things out. After over a month, on April 1st, the script executed successfully, and I started to work on the issues assigned by Troy.

These three months taught me to be consistent no matter the circumstances and to work patiently, which I wouldn’t have learned in college. This experience, along with the best practices I picked up, will no doubt make me a better developer and engineer. A timeline of my journey has been shared here.

  1. Check out my contributions here
  2. Check out the open-horizon-services repo

The LFX Mentorship Program was a great experience, and it gave me a learning curve I wouldn’t have gotten any other way. The program not only encourages developers to kick-start their open-source journey but also provides some great perks like networking and learning from the best minds. I would like to thank my mentors Joe Pearson, Troy Fine, and Glen Darling, because without their support and patience this wouldn’t have been possible. I will be forever grateful for this opportunity.

Special thanks to my mentor Troy for always being patient with me. His kind words will stay with me even though the program has ended.

And yes, how can I forget the awesome swag? Special thanks and gratitude to my mentor Joe Pearson for sending me such cool swag and this super cool handwritten thank-you note ❤

If you have any queries, connect with me on LinkedIn or Twitter and I would be happy to help you out 😀





Source link


Global visionaries headline the premier open source event in Europe to share insights on OSS adoption in Europe, driving the circular economy, finding inspiration through the pandemic, supply chain security and more.

SAN FRANCISCO, August 4, 2022 — The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the keynote speakers for Open Source Summit Europe, taking place September 13-16 in Dublin, Ireland. The event is being produced in a hybrid format, with both in-person and virtual participation available, and is co-located with the Hyperledger Global Forum, OpenSSF Day, Linux Kernel Maintainer Summit, KVM Forum, and Linux Security Summit, among others.

Open Source Summit Europe is the leading conference for developers, sysadmins, and community leaders to gather, collaborate, share information, gain insights, solve technical problems, and further innovation. It is a conference umbrella composed of 13 events covering the most important technologies and issues in open source, including LinuxCon, Embedded Linux Conference, OSPOCon, SupplyChainSecurityCon, CloudOpen, Open AI + Data Forum, and more. Over 2,000 attendees are expected.

2022 Keynote Speakers Include:

  • Hilary Carter, Vice President of Research, The Linux Foundation
  • Bryan Che, Chief Strategy Officer, Huawei; Cloud Native Computing Foundation Governing Board Member & Open 3D Foundation Governing Board Member
  • Demetris Cheatham, Senior Director, Diversity, Inclusion & Belonging Strategy, GitHub
  • Gabriele Columbro, Executive Director, Fintech Open Source Foundation (FINOS)
  • Dirk Hohndel, Chief Open Source Officer, Cardano Foundation
  • Ross Mauri, General Manager, IBM LinuxONE
  • Dušan Milovanović, Health Intelligence Architect, World Health Organization
  • Mark Pollock, Explorer, Founder & Collaborator
  • Christopher “CRob” Robinson, Director of Security Communications, Product Assurance and Security, Intel Corporation
  • Emilio Salvador, Head of Standards, Open Source Program Office, Google
  • Robin Teigland, Professor of Strategy, Management of Digitalization, in the Entrepreneurship and Strategy Division, Chalmers University of Technology; Director, Ocean Data Factory Sweden and Founder, Peniche Ocean Watch Initiative (POW)
  • Linus Torvalds, Creator of Linux and Git
  • Jim Zemlin, Executive Director, The Linux Foundation

Additional keynote speakers will be announced soon. 

Registration (in-person) is offered at the price of US$1,000 through August 23. Registration to attend virtually is $25. Members of The Linux Foundation receive a 20 percent discount off registration and can contact events@linuxfoundation.org to request a member discount code. 

Health and Safety
In-person attendees will be required to show proof of COVID-19 vaccination or provide a negative COVID-19 test to attend, and will need to comply with all on-site health measures, in accordance with The Linux Foundation Code of Conduct. To learn more, visit the Health & Safety webpage.

Event Sponsors
Open Source Summit Europe 2022 is made possible thanks to our sponsors, including Diamond Sponsors: AWS, Google and IBM, Platinum Sponsors: Huawei, Intel and OpenEuler, and Gold Sponsors: Cloud Native Computing Foundation, Codethink, Docker, Mend, NGINX, Red Hat, and Styra. For information on becoming an event sponsor, click here or email us.

Press
Members of the press who would like to request a press pass to attend should contact Kristin O’Connell.

ABOUT THE LINUX FOUNDATION
Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at https://linuxfoundation.org/

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

Visit our website and follow us on Twitter, LinkedIn, and Facebook for all the latest event updates and announcements.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. 

###

Media Contact
Kristin O’Connell
The Linux Foundation
koconnell@linuxfoundation.org





Source link


LISLE, IL., August 3, 2022 — The American Association of Insurance Services (AAIS) and the Linux Foundation welcome Jefferson Braswell as the new Executive Director of the openIDL Project.

“AAIS is excited about the expansion of openIDL in the insurance space, and the addition of Jefferson as Executive Director signals even more strength and momentum for the fast-developing project,” said Ed Kelly, AAIS Executive Director. “We are happy to continue to work with the Linux Foundation to help effect meaningful, positive change for the insurance ecosystem.”

“openIDL is a Linux Foundation Open Governance Network and the first of its kind in the insurance industry,” said Daniela Barbosa, General Manager of Blockchain, Healthcare and Identity at the Linux Foundation. “It leverages open source code and community governance for objective transparency and accountability among participants, with strong executive leadership helping shepherd this type of open governance network. Jeff Braswell’s background and experience in financial standards initiatives and consortium building align very well with openIDL’s next growth and expansion period.“

Braswell has been successfully providing leading-edge business solutions for information-intensive enterprises for over 30 years. As a founding Director, he recently completed a 6-year term on the Board of the Global Legal Entity Identifier Foundation (GLEIF), where he chaired the Technology, Operations and Standards Committee. He is also the Chair of the Algorithmic Contract Types Unified Standards Foundation (ACTUS), and he has actively participated in international financial data standards initiatives.

Previously, as Co-Founder and President of Berkeley-based Risk Management Technologies (RMT), Braswell designed and led the successful implementation of advanced, firm-wide risk management solutions integrated with enterprise-wide data management tools. They were used by many of the world’s largest financial institutions, including Wells Fargo, Credit Suisse, Chase, PNC, Sumitomo Mitsui Banking Corporation, Mellon, Wachovia, Union Bank and ANZ.

“We appreciate the foundation that AAIS laid for openIDL, and I look forward to bringing my expertise and knowledge to progress this project forward,” shared Braswell. “Continuing the work with the Linux Foundation to positively impact insurance services through open-source technology is exciting and will surely change the industry for the better moving forward.” 

openIDL, an open source, distributed ledger platform, infuses efficiency, transparency and security into regulatory reporting. With openIDL, insurers fulfill requirements while retaining the privacy of their data. Regulators have the transparency and insights they need, when they need them. Initially developed by AAIS, expressly for its Members, openIDL is now being further advanced by the Linux Foundation as an open-source ecosystem for the entire insurance industry.

ABOUT AAIS
Established in 1936, AAIS serves the Property & Casualty insurance industry as the only national nonprofit advisory organization governed by its Member insurance carriers. AAIS delivers tailored advisory solutions including best-in-class policy forms, rating information and data management capabilities for commercial lines, inland marine, farm & agriculture and personal lines insurers. Its consultative approach, unrivaled customer service and modern technical capabilities underscore a focused commitment to the success of its members. AAIS also serves as the administrator of openIDL, the insurance industry’s regulatory blockchain, providing unbiased governance within existing insurance regulatory frameworks. For more information about AAIS, please visit www.aaisonline.com.

ABOUT THE LINUX FOUNDATION

Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at https://linuxfoundation.org.

ABOUT openIDL
openIDL (open Insurance Data Link) is an open blockchain network that streamlines regulatory reporting and provides new insights for insurers, while enhancing timeliness, accuracy, and value for regulators. openIDL is the first open blockchain platform that enables the efficient, secure, and permissioned-based collection and sharing of statistical data. For more information, please visit www.openidl.org.

###

MEDIA CONTACT:

AAIS
John Greene
Director – Marketing & Communications
630.457.3238
johng@AAISonline.com

Linux Foundation

Dan Whiting
Director of Media Relations and Content
202-531-9091
dwhiting@linuxfoundation.org



Source link