

The role of software, specifically open source software, is more influential than ever and drives today’s innovation. Maintaining and growing future innovation depends on the open source community. Enterprises that understand this are driving transformation and rising to the challenges by boosting their collaboration across industries, understanding how to support their open source developers, and contributing to the open source community.

They realize that success depends on a cohesive, dedicated, and passionate open source community, from hundreds to thousands of individuals, whose collaboration is key to achieving the project’s goals. It can be challenging to manage all aspects of an open source project considering the many different parts that drive it. For example:

  • Project’s scope and goals
  • Participating members, maintainers, and collaborators
  • Management and governance
  • Process guidelines and procedures
  • IT services 
  • Source control, CI/CD, distribution, and cloud providers
  • Communication channels and social media

The Linux Foundation’s LFX provides various tools to help open source communities design and adopt a successful project strategy considering all moving parts. So how do they do it? Let’s explore that using the Hyperledger project as an example. 

1. Understand your project’s participation

Through the LFX Individual Dashboard, participants can register the identities they use to contribute code to GitHub and Gerrit (since the Hyperledger project uses both). The tool then uses those identities to connect users’ contributions, affiliations, memberships, training, certifications, earned badges, and other profile information.

With this information, other LFX tools gather data and populate charts that help the community visualize its participation in GitHub and Gerrit across the different Hyperledger repositories. It also displays detailed contribution metrics, code participation, and issue participation.

[Screenshot: LFX dashboard]

The LFX Organization Dashboard is a convenient tool to help managers and organizations manage their project memberships, discover similar projects to join, and understand the team’s engagement in the community. In detail, it provides information on:

  • Code contributions
  • Committee members
  • Event speakers and attendees 
  • Training and certification
  • Project enrollments

[Screenshot: LFX dashboard]

It is fundamental to keep the project’s member and participant identities organized in order to better understand how their work makes a difference in the project and how their participation interacts with others toward the project’s goals.

2. Manage your project’s processes

LFX Project Control Center offers a one-stop portal where program managers can organize their project participation and IT services, with quick access to the other LFX tools.

Project managers can also connect:

  • Their project’s source control
  • Issue tracking tool
  • Distribution service
  • Cloud provider
  • Mailing lists
  • Meeting management
  • Wiki and hosted domains 

For example, Hyperledger can view all related organizations under the Hyperledger Foundation umbrella, analyze each participating project, and connect services like GitHub, Jira, Confluence, and communication channels like Groups.io and Twitter accounts.

[Screenshot: LFX dashboard]

Managing all the project’s aspects in one place makes it easier for managers to visualize their project scope and better understand how all their services impact the project’s performance.

[Screenshot: LFX dashboard]

3. Reach outside and get your project in the spotlight

Social and earned media are fundamental to ensure your project reaches the ears of its consumers. In addition, it is essential to have good visibility into your project’s influence in the Open Source world and where it is making the best impact.

LFX’s Insights Social Media Metrics provides high-level metrics on a project’s social media accounts, such as:

  • Twitter followers and following information 
  • Tweets and retweet breakdown
  • Trending tweets
  • Hashtag breakdown 
  • Contributor and user mentions

In the case of Hyperledger, we have an overall view of their tweet and retweet breakdown. We can also see how tweets by Bitcoin News are making an impression on the interested communities.

[Screenshot: LFX dashboard]

Insights helps you analyze how your project impacts different regions and reaches diverse audiences by language, so you can adjust communication and marketing strategies toward the sources that open source participants rely on for the latest information on how the community contributes and engages. For example, tweets written in English, Japanese, and Spanish by Hyperledger contributors are visible in an overall languages chart, with direct and indirect impressions calculated.

[Screenshot: LFX dashboard]

The bottom line

A coherent open source project strategy is a crucial driver of how enterprises manage their open source programs across their organization and industry. LFX is one of the tools that make enterprise open source programs successful. It is an exclusive benefit for Linux Foundation members and projects. If your organization and project would like to join us, learn more about membership or hosting your project.





[Screenshot: Responsively App on Linux]

Responsively App is a free and open source dev tool for responsive web development, available for Linux, Microsoft Windows and macOS. It’s a modified browser that uses Electron, which shows a web app on multiple devices at the same time and in a single window with mirrored user interactions, DevTools, and more.

The application had its first public release back in March 2020 and is already quite popular, but I’ve only recently stumbled upon it and thought I’d share it with you.

Main Responsively App features:

  • Mirrored user-interactions across all devices: an action (like click, scroll, etc.) performed on one device is mirrored on all other devices
  • Customizable preview layout
  • A single element inspector for all devices in preview
  • 30+ built-in device profiles with option to add custom devices (including a special responsive mode device for freely resizing a screen)
  • One-click screenshot all your devices (full page screenshots of all devices or just a single device)
  • Auto-reload on all devices in real time for every HTML / CSS / JS save

The application also includes a live CSS editor, touch mode, a design mode that allows users to edit HTML directly without dev tools, network speed emulation options, zoom, disabling SSL validation, support for various protocols (file://, ftp://, etc.), and much, much more.

Using Responsively App, you also get network proxy support, light and dark themes and shortcut keys.

You might like: How To Enable Hardware Accelerated Video Decode In Google Chrome, Brave, Vivaldi And Opera Browsers On Debian, Ubuntu Or Linux Mint

There are also optional browser extensions (for Chrome, Firefox and Edge) that you can use to easily send links from your web browser to Responsively App to preview instantly.

In the future, the plan is to add features like built-in Lighthouse metrics, browser tabs and a screenshot gallery, among many other improvements and tweaks.

It’s also worth noting that while there are quite a few alternatives to Responsively App, like Polypane or Sizzy, most of them are closed source / paid. From what I could find, Responsively App is the only one that’s free and open source, though I may have missed some app.

As for Linux packages, Responsively App is packaged as an RPM for Fedora, openSUSE, etc., and as an AppImage, which should work on most Linux distributions.

You might also like: Save Web Pages As Single HTML Files For Offline Use With Monolith (Console)

Download Responsively App

To use the AppImage on Linux, right-click it, choose Properties and under Permissions, look for an option to allow executing the file as a program (this varies between file managers). Then to launch it, simply double-click the .AppImage file.
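
If you prefer the terminal, the same can be done with chmod (a sketch; the exact AppImage file name depends on the release you downloaded):

chmod +x ResponsivelyApp-*.AppImage
./ResponsivelyApp-*.AppImage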


We are happy to announce the release of Delta Lake 2.0 (pypi, maven, release notes) on Apache Spark™ 3.2. The key features of this release are covered below.

The significance of Delta Lake 2.0 is not just a number – though it is timed quite nicely with Delta Lake’s 3rd birthday. It reiterates our collective commitment to the open-sourcing of Delta Lake, as announced by Michael Armbrust’s Day 1 keynote at Data + AI Summit 2022.

What’s new in Delta Lake 2.0?

There have been a lot of new features released in the last year between Delta Lake 1.0, 1.2, and now 2.0. This blog will review a few of these specific features that are going to have a large impact on your workload.

[Chart: Delta Lake 1.2 vs. Delta Lake 2.0 feature comparison]

Improving data skipping

When exploring or slicing data using dashboards, data practitioners will often run queries with a specific filter in place. As a result, the matching data is often buried in a large table, requiring Delta Lake to read a significant amount of data. With data skipping via column statistics and Z-Order, the data can be clustered by the most common filters used in queries — sorting the table to skip irrelevant data, which can dramatically increase query performance.

Support for data skipping via column statistics

When querying any table from HDFS or cloud object storage, by default, your query engine will scan all of the files that make up your table. This can be inefficient, especially if you only need a smaller subset of data. To improve this process, as part of the Delta Lake 1.2 release, we included support for data skipping by utilizing the Delta table’s column statistics.

For example, when running the following query, you do not want to unnecessarily read files outside of the year or uid ranges.

SELECT * FROM events WHERE year = <year> AND uid = <uid>

When Delta Lake writes a table, it will automatically collect the minimum and maximum values and store this directly into the Delta log (i.e. column statistics). Therefore, when a query engine reads the transaction log, those read queries can skip files outside the range of the min/max values as visualized below.

[Diagram: per-file min/max column statistics in the Delta log let queries skip files outside the filtered range]

This approach is more efficient than row-group filtering within the Parquet file itself, as you do not need to read the Parquet footer. For more information on the latter process, please refer to How Apache Spark™ performs a fast count using the parquet metadata. For more information on data skipping, please refer to data skipping.

Support Z-Order clustering of data to reduce the amount of data read

But data skipping using column statistics is only one part of the solution. To maximize data skipping, what is also needed is the ability to skip with data clustering. As implied previously, data skipping is most effective when files have a very small minimum/maximum range. While sorting the data can help, this is most effective when applied to a single column.

OPTIMIZE deltaTable ZORDER BY (x, y)

Regular sorting of data by primary and secondary columns (left) and 2-dimensional Z-order data clustering for two columns (right).

But with Z-order, its space-filling curve provides better multi-column data clustering. This data clustering allows column stats to be more effective in skipping data based on filters in a query. See the documentation and this blog for more details.
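
Tying this back to the earlier query, a sketch of Z-ordering a table by its two common filter columns (the events table and column names are illustrative) would be:

OPTIMIZE events ZORDER BY (year, uid)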

Support Change Data Feed on Delta tables

One of the biggest value propositions of Delta Lake is its ability to maintain data reliability in the face of changing records brought on by data streams. However, this requires scanning and reading the entire table, creating significant overhead that can slow performance.

With Change Data Feed (CDF), you can now read a Delta table’s change feed at the row level rather than the entire table to capture and manage changes for up-to-date silver and gold tables. This improves your data pipeline performance and simplifies its operations.

To enable CDF, you must explicitly use one of the following methods:

  • New table: Set the table property delta.enableChangeDataFeed = true in the CREATE TABLE command.

    CREATE TABLE student (id INT, name STRING, age INT) TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • Existing table: Set the table property delta.enableChangeDataFeed = true in the ALTER TABLE command.

    ALTER TABLE myDeltaTable SET TBLPROPERTIES (delta.enableChangeDataFeed = true)
  • All new tables:

    set spark.databricks.delta.properties.defaults.enableChangeDataFeed = true;

An important thing to remember is once you enable the change data feed option for a table, you can no longer write to the table using Delta Lake 1.2.1 or below. However, you can always read the table. In addition, only changes made after you enable the change data feed are recorded; past changes to a table are not captured.
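
Once enabled, the row-level changes can be read with the DataFrame reader. A minimal PySpark sketch following the Delta Lake documentation (the table name and starting version are illustrative):

spark.read.format("delta") \
  .option("readChangeFeed", "true") \
  .option("startingVersion", 0) \
  .table("myDeltaTable")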

So when should you enable Change Data Feed? The following use cases should drive when you enable the change data feed.

  • Silver and Gold tables: When you want to improve Delta Lake performance by streaming row-level changes for up-to-date silver and gold tables. This is especially apparent following MERGE, UPDATE, or DELETE operations, accelerating and simplifying ETL operations.
  • Transmit changes: Send a change data feed to downstream systems such as Kafka or RDBMS that can use the feed to process later stages of data pipelines incrementally.
  • Audit trail table: Capturing the change data feed as a Delta table provides perpetual storage and efficient query capability to see all changes over time, including when deletes occur and what updates were made.

Support for dropping columns

For versions of Delta Lake prior to 1.2, Parquet files were required to store data with the same column names as the table schema. Delta Lake 1.2 introduced a mapping between the logical column name and the physical column name in those Parquet files. While the physical names remain unique, a logical column rename becomes a simple change in the mapping, and logical column names can have arbitrary characters while the physical name remains Parquet-compliant.

[Diagram: physical Parquet column names before column mapping and with column mapping]
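
As a sketch of how this is used (the property values follow the Delta Lake documentation; table and column names are illustrative), column mapping is enabled as a table property, after which a rename is a pure metadata change:

ALTER TABLE myDeltaTable SET TBLPROPERTIES (
  'delta.columnMapping.mode' = 'name',
  'delta.minReaderVersion' = '2',
  'delta.minWriterVersion' = '5'
)

ALTER TABLE myDeltaTable RENAME COLUMN oldColumn TO newColumn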

As part of the Delta Lake 2.0 release, we leveraged column mapping so that dropping a column is a metadata operation. Therefore, instead of physically modifying all of the files of the underlying table to drop a column, this can be a simple modification to the Delta transaction log (i.e. a metadata operation) to reflect the column removal. Run the following SQL command to drop a column:

ALTER TABLE myDeltaTable DROP COLUMN myColumn

See documentation for more details.

Support for Dynamic Partition Overwrites

In addition, Delta Lake 2.0 now supports Delta dynamic partition overwrite mode for partitioned tables; that is, overwrite only the partitions with data written into them at runtime.

When in dynamic partition overwrite mode, we overwrite all existing data in each logical partition for which the write will commit new data. Any existing logical partitions for which the write does not contain data will remain unchanged. This mode is only applicable when data is being written in overwrite mode: either INSERT OVERWRITE in SQL, or a DataFrame write with df.write.mode("overwrite"). In SQL, you can run the following commands:

SET spark.sql.sources.partitionOverwriteMode=dynamic;
INSERT OVERWRITE TABLE default.people10m SELECT * FROM morePeople;
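
With the DataFrame API, a corresponding sketch (option name per the Delta Lake 2.0 documentation; the table name is carried over from the SQL example) is:

df.write \
  .format("delta") \
  .mode("overwrite") \
  .option("partitionOverwriteMode", "dynamic") \
  .saveAsTable("default.people10m")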

Note that dynamic partition overwrite conflicts with the option replaceWhere for partitioned tables. See the documentation for details.

Additional Features in Delta Lake 2.0

In the spirit of performance optimizations, Delta Lake 2.0.0 also includes these additional features:

  • Support for idempotent writes to Delta tables to enable fault-tolerant retry of Delta table writing jobs without writing the data multiple times to the table (see the sketch after this list). See the documentation for more details.
  • Support for multi-part checkpoints to split the Delta Lake checkpoint into multiple parts, speeding up both writing and reading of checkpoints. See documentation for more details.
  • Other notable changes
    • Improved data skipping on generated columns by adding support for skipping by generated columns built from nested columns.
    • Improved table schema validation by blocking data types that are unsupported in Delta Lake.
    • Support creating a Delta Lake table with an empty schema.
    • Change the behavior of DROP CONSTRAINT to throw an error when the constraint does not exist. Before this version, the command used to return silently.
    • Fix the symlink manifest generation when partition values contain space in them.
    • Fix an issue where incorrect commit stats are collected.
    • More ways to access the Delta table OPTIMIZE file compaction command.
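
For the idempotent writes mentioned above, here is a minimal sketch using the txnAppId and txnVersion writer options described in the Delta Lake documentation (the application ID, version number, and path are illustrative):

df.write \
  .format("delta") \
  .option("txnAppId", "myIngestJob") \
  .option("txnVersion", 1) \
  .mode("append") \
  .save("/tmp/delta/events")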

Building a Robust Data Ecosystem

As noted in Michael Armbrust’s Day 1 keynote and our Dive into Delta Lake 2.0 session, a fundamental aspect of Delta Lake is the robustness of its data ecosystem.


As data volume and variety continue to rise, the need to integrate with the most common ingestion engines is critical. For example, we’ve recently announced integrations with Apache Flink, Presto, and Trino — allowing you to read and write to Delta Lake directly from these popular engines. Check out Delta Lake > Integrations for the latest integrations.

[Figure: Delta’s expanding ecosystem of connectors]

Delta Lake will be relied on even more to bring reliability and improved performance to data lakes by providing ACID transactions and unifying streaming and batch transactions on top of existing cloud data stores. By building connectors with the most popular compute engines and technologies, the appeal of Delta Lake will continue to increase — driving more growth in the community and rapid adoption of the technology across the most innovative and largest enterprises in the world.

Updates on Community Expansion and Growth

We are proud of the community and the tremendous work over the years to deliver the most reliable, scalable, and performant table storage format for the lakehouse to ensure consistent high-quality data. None of this would be possible without the contributions from the open-source community. In the span of a year, we have seen the number of downloads skyrocket from 685K monthly downloads to over 7M downloads/month. As noted in the following figure, this growth is in no small part due to the quickly expanding Delta ecosystem.

[Figure: The most widely used lakehouse format in the world]

All of this activity and the growth in unique contributions — including commits, PRs, changesets, and bug fixes — has culminated in an increase in contributor strength by 633% during the last three years (Source: The Linux Foundation Insights).

But it is important to remember that we could not have done this without the contributions of the community.

Credits

Saying this, we wanted to provide a quick shout-out to all of those involved with the release of Delta Lake 2.0: Adam Binford, Alkis Evlogimenos, Allison Portis, Ankur Dave, Bingkun Pan, Burak Yilmaz, Chang Yong Lik, Chen Qingzhi, Denny Lee, Eric Chang, Felipe Pessoto, Fred Liu, Fu Chen, Gaurav Rupnar, Grzegorz Kołakowski, Hussein Nagree, Jacek Laskowski, Jackie Zhang, Jiaan Geng, Jintao Shen, Jintian Liang, John O’Dwyer, Junyong Lee, Kam Cheung Ting, Karen Feng, Koert Kuipers, Lars Kroll, Liwen Sun, Lukas Rupprecht, Max Gekk, Michael Mengarelli, Min Yang, Naga Raju Bhanoori, Nick Grigoriev, Nick Karpov, Ole Sasse, Patrick Grandjean, Peng Zhong, Prakhar Jain, Rahul Shivu Mahadev, Rajesh Parangi, Ruslan Dautkhanov, Sabir Akhadov, Scott Sandre, Serge Rielau, Shixiong Zhu, Shoumik Palkar, Tathagata Das, Terry Kim, Tyson Condie, Venki Korukanti, Vini Jaiswal, Wenchen Fan, Xinyi, Yijia Cui, Yousry Mohamed.

We’d also like to thank Nick Karpov and Scott Sandre for their help with this post.

How can you help?

We’re always excited to work with current and new community members. If you’re interested in helping the Delta Lake project, please join our community today through many forums, including GitHub, Slack, Twitter, LinkedIn, YouTube, and Google Groups.

Join the community today





The following post originally appeared on Medium. The author, Ruchi Pakhle, participated in our LFX Mentorship program this past spring.

Hey everyone!
I am Ruchi Pakhle, currently pursuing my Bachelor’s in Computer Engineering from MGM’s College of Engineering & Technology. I am a passionate developer and an open-source enthusiast. I recently graduated from the LFX Mentorship Program. In this blog post, I will share my experience of contributing to Open Horizon, a platform for deploying container-based workloads and related machine learning models to compute nodes/clusters on the edge.

I have been an active contributor to open-source projects via different programs like GirlScript Summer of Code, Script Winter of Code, and so on. Through these programs, I contributed to different beginner-level open-source projects. After almost a year of doing this, I contributed to different organizations on different projects, including documentation and code. One very random morning, applications for LFX opened up, and I saw various posts on LinkedIn; among them was a post from my very dear friend Unnati Chhabra, who had just graduated from the program. So I went ahead, checked for an organization that fit my skill set, and decided to give it a shot.

I was very interested in DevOps and Cloud Native technologies and wanted to get started with them, but I had been procrastinating a lot and did not know how to pave my path ahead. I was constantly looking for opportunities that I could get my hands on. And as Open Horizon works exactly on DevOps and Cloud Native technologies, I straight away applied to their project, and they had two slots open for the spring cohort. I joined their Element channel and started becoming active by contributing to the project, engaging with the community, and reading more about the architecture, trying to understand it well by referring to their YouTube videos. You can contribute to Open Horizon here.

The Linux Foundation opens LFX Mentorship applications three times a year: spring, summer, and winter cohorts, each spanning 3 months. I applied to the spring cohort, for which applications opened up around February 2022, and I submitted my application on 4th February 2022 for the Open Horizon project. I remember there were three documents mandatory for submitting the application:

1. Updated Resume/CV

2. Cover Letter

(this is very very important in terms of your selection so cover everything in your cover letter and maybe add links to your projects, achievements, or wherever you think they can add great value)

The cover letter should cover these points primarily👇

  • How did you find out about our mentorship program?
  • Why are you interested in this program?
  • What experience and knowledge/skills do you have that are applicable to this program?
  • What do you hope to get out of this mentorship experience?

3. A permission document from your university stating it has no objection to your participating for the entire span of the mentorship (this varies from org to org and may not be asked for at all)

The LFX acceptance mail was a major achievement for me, as at that period of time I was constantly getting rejections and had absolutely no idea how things were going to work out for me. I was constantly doubting myself, and this mail not only boosted my confidence but also gave me a ray of hope that working hard consistently pays off. A major thanks to my mentors, Joe Pearson and Troy Fine, for believing in me and giving me this opportunity.⭐

From the day I applied to LFX, to getting selected as an LFX Mentee, to working successfully for over three and a half months, it all felt surreal. I have been contributing to open-source projects and organizations before. But being a part of LFX gave me such a huge learning curve and a sense of credibility and ownership that I wouldn’t have gotten anywhere else.


I still remember setting up the mgmt-hub all-in-one script locally. I thought it would be a cakewalk; it was not. I tried every single day to run the script, but it would end up giving some error; I would google the errors and apply the fixes, but still it would fail. One thing I did consistently was share my progress regularly with my mentor, Troy. No matter how often the script failed, I communicated that to Troy and sent him the logs, and he would give me probable solutions, but still the script would fail. I then messaged the open-horizon-examples group, and Joe helped with my doubts; a huge thanks to him and Troy for patiently helping me figure things out. After over a month, on April 1st, the script finally executed successfully, and I started to work on the issues assigned by Troy.

These three months taught me to be consistent no matter the circumstances and to work patiently, which I wouldn’t have learned in college. This experience will no doubt make me a better developer and engineer, along with the best practices followed. A timeline of my journey has been shared here.

  1. Check out my contributions here
  2. Check out the open-horizon-services repo

The LFX Mentorship Program was a great experience and gave me a learning curve I wouldn’t have gotten any other way. The program not only encourages developers to kick-start their open-source journey but also provides some great perks like networking and learning from the best minds. I would like to thank my mentors Joe Pearson, Troy Fine, and Glen Darling, because without their support and patience this wouldn’t have been possible. I will be forever grateful for this opportunity.

Special thanks to my mentor Troy for always being patient with me. His kind words will remain with me even though the program has ended.

And yes, how can I forget to plug the awesome swag? Special thanks and gratitude to my mentor Joe Pearson for sending me such cool swag and this super cool note ❤ [Photo: handwritten thank-you note from Joe Pearson]

If you have any queries, connect with me on LinkedIn or Twitter and I would be happy to help you out 😀







Global visionaries headline the premier open source event in Europe to share on OSS adoption in Europe, driving the circular economy, finding inspiration through the pandemic, supply chain security and more.

SAN FRANCISCO, August 4, 2022 —  The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the keynote speakers for Open Source Summit Europe, taking place September 13-16 in Dublin, Ireland. The event is being produced in a hybrid format, with both in-person and virtual participation available, and is co-located with the Hyperledger Global Forum, OpenSSF Day, Linux Kernel Maintainer Summit, KVM Forum, and Linux Security Summit, among others.

Open Source Summit Europe is the leading conference for developers, sysadmins, and community leaders to gather, collaborate, share information, gain insights, solve technical problems, and further innovation. It is a conference umbrella composed of 13 events covering the most important technologies and issues in open source, including LinuxCon, Embedded Linux Conference, OSPOCon, SupplyChainSecurityCon, CloudOpen, Open AI + Data Forum, and more. Over 2,000 attendees are expected.

2022 Keynote Speakers Include:

  • Hilary Carter, Vice President of Research, The Linux Foundation
  • Bryan Che, Chief Strategy Officer, Huawei; Cloud Native Computing Foundation Governing Board Member & Open 3D Foundation Governing Board Member
  • Demetris Cheatham, Senior Director, Diversity, Inclusion & Belonging Strategy, GitHub
  • Gabriele Columbro, Executive Director, Fintech Open Source Foundation (FINOS)
  • Dirk Hohndel, Chief Open Source Officer, Cardano Foundation
  • Ross Mauri, General Manager, IBM LinuxONE
  • Dušan Milovanović, Health Intelligence Architect, World Health Organization
  • Mark Pollock, Explorer, Founder & Collaborator
  • Christopher “CRob” Robinson, Director of Security Communications, Product Assurance and Security, Intel Corporation
  • Emilio Salvador, Head of Standards, Open Source Program Office, Google
  • Robin Teigland, Professor of Strategy, Management of Digitalization, in the Entrepreneurship and Strategy Division, Chalmers University of Technology; Director, Ocean Data Factory Sweden and Founder, Peniche Ocean Watch Initiative (POW)
  • Linus Torvalds, Creator of Linux and Git
  • Jim Zemlin, Executive Director, The Linux Foundation

Additional keynote speakers will be announced soon. 

Registration (in-person) is offered at the price of US$1,000 through August 23. Registration to attend virtually is $25. Members of The Linux Foundation receive a 20 percent discount off registration and can contact events@linuxfoundation.org to request a member discount code. 

Health and Safety
In-person attendees will be required to show proof of COVID-19 vaccination or provide a negative COVID-19 test to attend, and will need to comply with all on-site health measures, in accordance with The Linux Foundation Code of Conduct. To learn more, visit the Health & Safety webpage.

Event Sponsors
Open Source Summit Europe 2022 is made possible thanks to our sponsors, including Diamond Sponsors: AWS, Google and IBM, Platinum Sponsors: Huawei, Intel and OpenEuler, and Gold Sponsors: Cloud Native Computing Foundation, Codethink, Docker, Mend, NGINX, Red Hat, and Styra. For information on becoming an event sponsor, click here or email us.

Press
Members of the press who would like to request a press pass to attend should contact Kristin O’Connell.

ABOUT THE LINUX FOUNDATION
Founded in 2000, the Linux Foundation and its projects are supported by more than 2,950 members. The Linux Foundation is the world’s leading home for collaboration on open source software, hardware, standards, and data. Linux Foundation projects are critical to the world’s infrastructure including Linux, Kubernetes, Node.js, ONAP, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users, and solution providers to create sustainable models for open collaboration. For more information, please visit us at https://linuxfoundation.org/

The Linux Foundation Events are where the world’s leading technologists meet, collaborate, learn and network in order to advance innovations that support the world’s largest shared technologies.

Visit our website and follow us on Twitter, LinkedIn, and Facebook for all the latest event updates and announcements.

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. 

###

Media Contact
Kristin O’Connell
The Linux Foundation
koconnell@linuxfoundation.org







This article originally appeared on the LF Public Health project’s blog.

The past three years have redefined the practice and management of public health on a global scale. What will we need in order to support innovation over the next three years?

In May 2022, ASTHO (Association of State and Territorial Health Officials) held a forward-looking panel at their TechXPO on public health innovation, with a specific focus on public-private partnerships. Jim St. Clair, the Executive Director of Linux Foundation Public Health, spoke alongside representatives from MITRE, Amazon Web Services, and the Washington State Department of Health.

Three concepts appeared and reappeared in the panel’s discussion: reimagining partnerships; sustainability and governance; and design for the future of public health. In this blog post, we dive into each of these critical concepts and what they mean for open-source communities.

Reimagining partnerships

The TechXPO panel opened with a discussion on partnerships for data modernization in public health, a trending topic at the TechXPO conference. Dr. Anderson (MITRE) noted that today’s public health projects demand “not just a ‘public-private’ partnership, but a ‘public-private-community-based partnership’.” As vaccine rollouts, digital applications, and environmental health interventions continue to be deployed at scale, the need for community involvement in public health will only increase.

However, community partnerships should not be viewed as just another “box to check” in public health. Rather, partnerships with communities are a transformative way to gain feedback while improving usability and effectiveness in public-health interventions. As an example, Dr. Anderson referenced the successful VCI (Vaccination Credential Initiative) project, mentioning “When states began to partner to provide data… and offered the chance for individuals to provide feedback… the more eyeballs on the data, the more accurate the data was.”

Cardea, an LFPH project that focuses on digital identity, has also benefited from public-private-community-based partnerships. Over the past two years, Cardea has run three community hackathons to test interoperability among other tools that use Cardea’s codebase. Trevor Butterworth, VP of Cardea’s parent company, Indicio, explained his thoughts on community involvement in open source: “The more people use an open source solution, the better the solution becomes through stress testing and innovation; the better it becomes, the more it will scale because more people will want to use it.” Cardea’s public and private-sector partnerships also include Indicio, SITA, and the Aruba Health Department, demonstrating the potential for diverse stakeholders to unite around public-health goals.

Community groups are also particularly well-positioned to drive innovation in public health: they are often attuned to pressing issues that might be otherwise missed by institutional stakeholders. One standout example is the Institute for Exceptional Care (IEC), a LFPH member organization focused on serving individuals with intellectual and developmental disabilities, “founded by health care professionals, many driven by personal experience with a disabled loved one.” IEC recently presented a webinar on surfacing intellectual and developmental disabilities in healthcare data: both the webinar and Q&A showcased the on-the-ground knowledge of this deeply involved, solution-oriented community.

Sustainability and governance

Sustainability is at the heart of every viable open source project, and must begin with a complete, consensus-driven strategy. As James Daniel (AWS) mentioned in the TechXPO panel, it is crucial to determine “exactly what a public health department wants to accomplish, [and] what their goals are” before a solution is put together. Defining these needs and goals is also essential for long-term sustainability and governance, as mentioned by Dr. Umair Shah (WADOH): “You don’t want a scenario where you start something and it stutters, gets interrupted and goes away. You could even make the argument that it’s better to not have started it in the first place.”

Questions of sustainability and project direction can often be answered by bringing private and public interests to the same table before the project starts. Together, these interests can determine how a potential open-source solution could be developed and used. As Jim St. Clair mentioned in the panel: “Ascertaining where there are shared interests and shared values is something that the private sector can help broker.” Even if a solution is ultimately not adopted, or a partnership never forms, a frank discussion of concerns and ideas among private- and public-sector stakeholders can help clarify the long-term capabilities and interests of all stakeholders involved.

Moreover, a transparent discussion of public health priorities, questions, and ideas among state governments, private enterprises, and nonprofits can help drive forward innovation and improvements even when there is no specific project at hand. To this end, LFPH hosts a public Slack channel as well as weekly Technical Advisory Council (TAC) meetings in which we host new project ideas and presentations. TAC discussions have included concepts for event-driven architecture for healthcare data, a public health data sharing mesh, and “digital twins” for informatics and research.

Design for the future of public health

Better partnerships, sustainability, and governance provide exciting prospects for what can be accomplished in open-source public health projects in the coming years. As Jim St. Clair (LFPH) mentioned in the TechXPO panel: “How do we then leverage these partnerships to ask ‘What else is there about disease investigative technology that we could consider? What other diseases, what other challenges have public health authorities always had?’” These challenges will not be tackled through closed source solutions—rather, the success of interoperable, open-source credentialing and exposure notifications systems during the pandemic has shown that open-source has the upper hand when creating scalable, successful, and international solutions.

Jim St. Clair is not only optimistic about tackling new challenges, but also about taking on established challenges that remain pressing: “Now that we’ve had a crisis that enabled these capabilities around contact tracing and notifications… [they] could be leveraged to expand into and improve upon all of these other traditional areas that are still burning concerns in public health.” For example, take one long-running challenge in United States healthcare: “Where do we begin… to help drive down the cost and improve performance and efficiency with Medicaid delivery? … What new strategies could we apply in population health that begin to address cost-effective care-delivery patient-centric models?”

Large-scale healthcare and public-health challenges such as mental health, communicable diseases, diabetes—and even reforming Medicaid—will only be accomplished by consistently bringing all stakeholders to the table, determining how to sustainably support projects, and providing transparent value to patients, populations and public sector agencies. LFPH has pursued a shared vision around leveraging open source to improve our communities, carrying forward the same resolve as the diverse groups that originally came together to create COVID-19 solutions. The open-source journey in public health is only beginning.





This post originally appeared on the Academy Software Foundation’s (ASWF) blog. The ASWF works to increase the quality and quantity of contributions to the content creation industry’s open source software base.

Tell us a bit about yourself – how did you get your start in visual effects and/or animation? What was your major in college?

I started experimenting with the BASIC programming language when I was 12 years old on a Sinclair ZX81 home computer, playing a game called “Lunar Lander” which ran in 1K of RAM and took about 5 minutes to load from cassette tape.

I have a Bachelor’s degree in Cognitive Science and Computer Science.

My first job out of college was as a Graphics Engineer at Wavefront Technologies, working on the precursor to the Maya 1.0 3D animation system, still used today. Then I took a Digital Artist role at Digital Domain.

What is your current role?

Co-Founder / CEO at Wevr. I’m currently focused on Wevr Virtual Studio – a cloud platform we’re developing for interactive creators and teams to more easily build their projects on game engines.

What was the first film or show you ever worked on? What was your role?

First film credit: True Lies, Digital Artist.

What has been your favorite film or show to work on and why?

TheBlu 1.0 digital ocean platform. Why? We recently celebrated TheBlu’s 10-year anniversary. TheBlu franchise is still alive today. At the core of TheBlu was/is a creator platform enabling 3D interactive artists/developers around the world to co-create the 3D species and habitats in TheBlu. The app itself was a mostly decentralized peer-to-peer simulation that ran on distributed computers with fish swimming across the Internet. The core tenets of TheBlu 1.0 are still core to me and Wevr today, as we participate more and more in the evolving Metaverse.

How did you first learn about open source software?

Linux and Python were my best friends in 2000.

What do you like about open source software? What do you dislike?

Likes: Transparent, voluntary collaboration.

Dislikes: Nothing.

What is your vision for the Open Source community and the Academy Software Foundation?

Drive international awareness of the Foundation and OSS projects.

Where do you hope to see the Foundation in 5 years?

A global leader in best practices for real-time engine-based production through international training and education.

What do you like to do in your free time?

Read books, listen to podcasts, watch documentaries, meditation, swimming, and efoiling!

Follow Neville on Twitter and connect on LinkedIn.  







By Ashwin Ramaswami

Last month, we concluded the Linux Foundation’s 2022 Open Source Summit North America (OSS NA), when developers, technologists, and community leaders from industry, academia, and government converged in Austin, Texas, from June 21-24 to talk about all things open source. Participants and speakers highlighted open source innovation and efforts to ensure a sustainable open source ecosystem.

What did the summit tell us about the state of OSS security? Several parts of the conference addressed different aspects of this issue – OpenSSF Day, Critical Software Summit, SupplyChainSecurityCon, and the Global Security Vulnerability Summit. Overall, the summit demonstrated an increased emphasis on open source security as a community effort with various stakeholders. More ambitious and innovative approaches to handling the open source security problem – including collaboration, tools, and training – were also introduced. Finally, the summit highlighted the importance for open source users to give back to the community and contribute upstream to the projects they depend on.

Let’s explore these ideas in more detail!


Open source security as a community effort

Open source security is not just an isolated effort by users or maintainers of open source software. As OSS NA showed, the stakes of open source security have turned it into a community effort, where a wide variety of diverse stakeholders have an interest and are beginning to get involved.

  • As Todd Moore (IBM) mentioned in his keynote, incidents such as log4shell have made open source security a bigger priority for governments – and it is important for existing open source stakeholders, both users and maintainers, to work as a community to take a cohesive message back to the government to articulate our community’s needs and how we are responding to this challenge.
  • Speakers at a panel discussion with the Atlantic Council’s Cyber Statecraft Initiative and the Open Source Security Foundation (OpenSSF) discussed the summit held by OpenSSF in Washington, DC on May 12 and 13, where representatives from industry and government met to develop the Open Source Software Security Mobilization Plan, a $150 million plan for better securing the open source ecosystem.
  • A panel discussion explored how major businesses are working together to improve the security of the open source supply chain, particularly through the governance structure of the OpenSSF.

New approaches to address open source security

OSS NA featured several initiatives to address fundamental open source security issues, many of which were particularly ambitious and innovative.

  • The OpenSSF’s Alpha-Omega Project was announced to address software vulnerabilities for OSS projects that are most critical (alpha) and at the long tail (omega).
  • Eric Brewer (Google) gave a keynote discussing the fundamental problem of ensuring accountability in the open source software supply chain. One way of solving this is through curation: creating a repository of vetted and secure packages.
  • Standards continue to be important, as always: Art Manion (CERT/CC) discussed the history and future of the CVE Program, while Jennings Aske (New York-Presbyterian Hospital) and Melba Lopez (IBM) discussed the importance of a Software Bill of Materials (SBOM).
  • The importance of security tooling was emphasized, with discussions on tools such as sigstore, automation of security checks through Infrastructure as Code tools, and CI/CD pipelines.
  • David Wheeler (Linux Foundation) discussed how education in secure software development is critical to ensuring open source software security. Courses like the OpenSSF’s Secure Software Development Fundamentals Courses are available to help developers learn this topic.

Giving back to the community

Participants at the summit recognized that open source security is ultimately a matter of community, governance, and sustainability. Projects that don’t have the right resources or governance structure may not be able to ensure their projects are secure or accept the right funding to do so.

  • Steve Hendrick (Linux Foundation) and Matt Jarvis (Snyk) discussed the release of the 2022 State of Open Source Security report from Snyk and the Linux Foundation. The report noted that open source software is often a one-way street, where users see significant benefits with minimal cost or investment. It recommended that organizations close the loop and give back to the OSS projects they use so that those projects can meet user expectations.
  • Aeva Black (Microsoft) discussed approaches to community risk management through drafting and enforcing a code of conduct, and how ignoring community health can lead to sometimes catastrophic technical outcomes for OSS Projects.
  • Sean Goggins (CHAOSS) discussed the relationship between community health and vulnerability mitigation in open source projects by using metrics models from the CHAOSS projects.
  • Margaret Tucker and Justin Colannino (GitHub) discussed the role that package registries have in open source security, beginning to formulate some principles that would balance these registries’ responsibility for safety and reliability with the freedom and creativity of package maintainers.
  • Naveen Srinivasan (Endor Labs) and Laurent Simon (Google) explored the OpenSSF Scorecard to more easily analyze the security of open source projects and proactively improve their security.
  • Amir Montazery (OSTIF) discussed the Open Source Technology Improvement Fund’s efforts to help OSS maintainers to work with security experts to improve their projects’ security posture.

Conclusion

In sum, the talks and conversations at OSS Summit NA help paint a picture of how key stakeholders in the open source software ecosystem – OSS communities, industry, academia, and government – are conceptualizing big-picture issues and directing efforts around OSS security.

But these initiatives and talks still have a lot of room for input! Whether individually or through your institution, consider adding your voice to this discussion as we continue to support the open source software community. Join an OpenSSF working group, another initiative, or contribute upstream to open source projects that you depend on.





You may have tried many C functions while writing C code on the Linux platform. Most of these functions perform input and output operations; one of them is the open() function, i.e., the open(2) system call. The open() function in the C programming language opens a file at the specified path or directory. If the specified file does not exist at that location, the function may fail with an error, or it may create the file at the specified location/path if certain flags are passed. We can conclude that the open function is equally valuable for reading and writing. So, we cover the use of the open(2) C function on our Ubuntu 20.04 platform along with some examples.

Syntax
The syntax of the Open() function in the C language is given below. Let’s discuss its parameters:

int open (const char* path, int flags [, int mode ]);

Path

Path is the name of the file that you would like to open or create; it also specifies the file’s location. If we are not working in the same directory as the file, we can provide an absolute path that begins with “/”. Alternatively, we can specify a relative path where, in some cases, we just mention the file name and extension.

Flags

To utilize the flags, here is the list with their respective explanations:

  • O_RDONLY: Open the file in read-only mode.
  • O_WRONLY: Open the file in write-only mode.
  • O_RDWR: Open the file in read and write mode.
  • O_CREAT: Create the file if it doesn’t exist in the specified path or directory.
  • O_EXCL: Used with O_CREAT; makes the call fail if the file already exists in the directory or location.

Here, the O_ prefix stands for “open”.
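
Flags can be combined with the bitwise OR operator, and when O_CREAT is passed, the optional third argument sets the permission bits of a newly created file. A quick illustration (the file name and permissions here are arbitrary):

int fd = open("data.txt", O_RDONLY | O_CREAT, 0644);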

Header File/Library

The following header file must be included in the code to use this function.
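
Per the open(2) man page, that header is:

#include <fcntl.h>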

To create or open a file in a certain directory or path, use the Vim editor (e.g., “vim openFile.c”). “openFile.c” is the name of the file that we created. When we type this command, the editor opens the file in editing mode, permitting us to type the lines of code into the file. To close the Vim editor and save the file, press the Escape key, type a colon (:) followed by x, then press the Enter key.

The following lines of code are typed into the “openFile.c” file. We use a relative path to open the “testopen.txt” file in the following code. The O_RDONLY (read only) and O_CREAT flags were passed (create the “testopen.txt” file if it does not exist in the current directory).

The printf function is then used to display the return value, i.e., the file descriptor. We also verify whether the file descriptor equals -1, which indicates that the open call failed, and print the error in that case.
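
The listing itself appeared as an image in the original post; a minimal sketch consistent with the description above (the permission bits are added because O_CREAT requires a mode argument) looks like this:

#include <stdio.h>
#include <fcntl.h>

int main() {
    // Open "testopen.txt" via a relative path in read-only mode,
    // creating it with rw-r--r-- permissions if it does not exist.
    int fileDescriptor = open("testopen.txt", O_RDONLY | O_CREAT, 0644);

    // Display the value returned by open().
    printf("The file descriptor is: %d\n", fileDescriptor);

    // A return value of -1 means the open call failed; print the reason.
    if (fileDescriptor == -1) {
        perror("open");
    }

    return 0;
}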

We make use of the GCC compiler to compile the file. If you don’t have the GCC C compiler installed, run the following commands to get it on your Linux-Ubuntu system (you can check the installed version with “gcc --version”):

sudo apt update
sudo apt install build-essential

Type the following command to compile “openFile.c” with the GCC compiler. The command invokes gcc, then specifies the file that we want to compile along with its extension, followed by the -o flag (used to write the output to the particular object file specified right after the flag):

gcc openFile.c -o openFile.out

Alternatively, we can run the command without the -o flag, which produces an “a.out” object file in the current directory by default. Using the list directory (ls) command, check for the output or object file, i.e., openFile.out.

Run the output or object file by typing “./openFile.out”; it displays a file descriptor equal to 3. This indicates that the provided file (testopen.txt) is present in the directory containing the output file.

Open the C file with the Vim editor once more, but this time modify the file name (openFile1.txt) in the open function. Then, save and close the “openFile.c” file.

Another change in the open command is passing only the O_RDONLY flag, which opens “openFile1.txt” in read-only mode. It means that we can only read the data of the file; we cannot write to or update the specified file.

Compile the file again to update the output file. After that, run the code using the object file. Since we do not have the specified text file in the current directory, the open() function throws an error and returns -1, which is stored in the fileDescriptor variable of integer type. The following screen displays the output of openFile. If no output file name was specified at compile time, simply type “./a.out” in the terminal to see the program’s output.

We opened the “openFile.c” file in the Vim editor once more and used the O_EXCL flag in the open command, without O_CREAT. This implies that if the specified file does not exist in the directory, it is not created; if it does exist, it is simply opened. Because there is no “openFile1.txt” file in the directory, the open method returns an error.

The following screen demonstrates that we do not have the given file in the path, and the open function returns -1, indicating that no such file or directory exists. The same generic “no such file or directory” error appears if the output file’s name is typed incorrectly.

Conclusion

This article covered the use of the open(2) C function on a Linux system. We discussed how this system call can be used to open and read a file and its contents, and how it returns an error when the required file cannot be found.





This post originally appeared on the Hyperledger Foundation’s blog. You can read the full case study here

Some years ago, researchers realized that IoT devices would need to buy and sell from one another. In this “Economy of Things,” the items to be traded will include power, data, and connectivity. Most transactions will be fast, low value, and high frequency.

For a company like The Bosch Group, which is active in everything from autonomous vehicles to thermal plants, the Economy of Things will touch many lines of business. That’s why, in 2017, the company’s advanced research group, Bosch Research, was looking for a way to scale up blockchain transactions to support the Economy of Things.

Bosch set out to meet that requirement by leveraging a specific, step-by-step open source strategy for developing new markets:

  1. Identify a requirement
  2. Set goals
  3. Consider the terrain
  4. Build a partnership
  5. Pick a suitable license
  6. Use open source archetypes

The goals were to lead an effort to create standards for the Economy of Things and to build a framework where different partners could work together.

A survey of likely partners led the Bosch team to Perun, an early layer-2 protocol that passes state information off-chain through virtual channels. Bosch joined forces with several academics to implement this protocol and start creating an ecosystem.

As part of the process, Perun needed a stable home where everyone could access the latest code, and other people could find it. Hyperledger Labs provides a space where developments can be started without the overhead of creating an official Hyperledger project.

In Q3 2020, Perun was welcomed into Hyperledger Labs, and development has continued with work from the team at Bosch and PolyCrypt GmbH, a startup spun out of the Technical University of Darmstadt, where much of the academic research behind Perun began.

The Bosch team was eager to talk about its approaches and contributions to Hyperledger Foundation. To that end, they worked with Hyperledger marketing and others in the Perun community on a case study that details not only the business and technology challenges they’ve set out to tackle but also the strategic way they are leveraging open source development to advance the industry for all.

We never know what technology will turn into the Next Big Thing.

Perhaps Perun will be one of them, powering billions of micropayments between IoT devices or enabling people to shop with Central Bank Digital Currencies (CBDCs) that are still on the drawing board today.

Read the full case study here.


