

"In the UNIX/Linux ecosystem, the sed command is a dedicated tool for editing streams, hence the name (stream editor). It receives text input as a stream and performs the specified operations on it."

In this guide, we will explore performing in-place file editing with sed.

Prerequisites

To perform the steps demonstrated in this guide, you'll need the following components:

    • A Linux or UNIX-like system with terminal access
    • The sed tool (preinstalled on virtually all distributions)

Editing Stream Using sed

First, let’s have a brief look at how sed operates. The command structure of sed is as follows:

$ sed <options> <operations> <stream>

 
The following command showcases a simple workflow of sed:

$ echo "the quick brown fox" | sed -e 's/quick/fast/'

 

Here,

    • The echo command prints the string on STDOUT. Learn more about STDIN, STDOUT, and STDERR.
    • We’re piping the output to the sed command. Here, the STDOUT of echo serves as the stream that sed operates on.
    • The sed command, as specified, will search for any instance of the word quick and replace it with fast. The resultant stream will be printed on the console.
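For instance, the trailing g flag of the s command controls whether sed replaces only the first match on each line or all of them:

```shell
# By default, the s command replaces only the first match on each line;
# the g flag makes the substitution global.
echo "one two one two" | sed 's/one/1/'    # -> 1 two one two
echo "one two one two" | sed 's/one/1/g'   # -> 1 two 1 two
```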

What if we wanted to modify the texts of a text file? The sed command can also work using text files as the stream. For demonstration, I’ve grabbed the following text file:
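The contents of the original file aren't reproduced here; to follow along, a hypothetical demo.txt with a few occurrences of "the" can be created like this:

```shell
# Create a small sample file (hypothetical content for demonstration)
cat > demo.txt <<'EOF'
the quick brown fox jumps over the lazy dog
the river flows past the old mill
EOF
```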

 

The following sed command will replace all instances of "the" with "da":

$ sed -e 's/the/da/g' demo.txt

 

Check the content of demo.txt for changes:

 

From the last example, we can see that sed only printed the resultant stream on the console. The source file (demo.txt) wasn’t touched.

Editing Files In-place Using sed

As demonstrated in the previous example, the default action of sed is to print the changed content on the screen. It’s a great feature that can prevent accidental changes to files. However, if we want to save the changes to the file, we need to provide some additional options.

A simple and common technique would be replacing the content of the file with the sed output. Have a look at the following command:

$ cat demo.txt | sed 's/the/da/g' | tee demo.txt

 

Here, we’re overwriting the contents of demo.txt with the output from the sed command. Note that this pattern is racy: tee truncates demo.txt as soon as the pipeline starts, so on larger files cat may not finish reading before the data is gone. Writing to a temporary file and then moving it over the original is safer.

While the command functions as intended, it requires extra typing and extra processes: the cat and tee commands run alongside sed, and Bash handles the piping between them. Thus, the command is more resource-intensive than it needs to be.

To solve this, we can use the in-place edit feature of sed. In this mode, sed will change the contents of the file directly. To invoke the in-place edit mode, we have to use the -i or --in-place flag. The following sed command implements it:

$ sed --in-place -e 's/the/da/g' demo.txt

 

Check demo.txt for changes:

 

As you can see, the file contents are changed without involving any additional commands.
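If you want a safety net while editing in place, the -i flag accepts an optional backup suffix: sed writes the edited result to the file and keeps the original under the suffixed name. A short sketch (the .bak suffix here is our own choice):

```shell
# Edit in place, but keep the original as demo.txt.bak
printf 'the cat sat on the mat\n' > demo.txt
sed -i.bak 's/the/da/g' demo.txt

cat demo.txt       # edited copy: da cat sat on da mat
cat demo.txt.bak   # untouched original
```

The attached-suffix form (-i.bak, no space) works with both GNU sed and BSD/macOS sed, which makes it the more portable spelling.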

Final Thoughts

In this guide, we successfully demonstrated performing in-place edits on text files using sed. While sed itself is a simple program, its main source of power lies in its ability to incorporate regular expressions. Regex allows describing very complex patterns for sed to act upon. Check out regex in sed to learn more.
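As a small taste of that power, extended regular expressions (-E, or -r on older GNU sed) let a single sed expression match whole classes of text, such as any run of digits:

```shell
# Replace every run of digits with the placeholder N
echo "order 1234 shipped on 2021-06-01" | sed -E 's/[0-9]+/N/g'   # -> order N shipped on N-N-N
```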

Alternatively, you can use Bash scripts to filter and modify the contents of a file. In fact, you can incorporate sed in your scripts to fine-tune text content. Check out this guide on getting started with Bash scripting.
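As a minimal sketch of that idea, a Bash function (the name replace_in_txt and its arguments are hypothetical) can wrap sed -i to rewrite every .txt file in a directory. Note that the bare -i form shown here is GNU sed syntax; BSD/macOS sed expects -i '' instead:

```shell
#!/bin/bash
# replace_in_txt OLD NEW [DIR] — run an in-place sed substitution
# over every .txt file in DIR (defaults to the current directory).
replace_in_txt() {
    local old=$1 new=$2 dir=${3:-.}
    local f
    for f in "$dir"/*.txt; do
        [ -e "$f" ] || continue          # skip if the glob matched nothing
        sed -i "s/$old/$new/g" "$f"      # GNU sed; use -i '' on BSD/macOS
    done
}

# Usage sketch
mkdir -p /tmp/sed-demo
printf 'the cat\n' > /tmp/sed-demo/a.txt
replace_in_txt the da /tmp/sed-demo
cat /tmp/sed-demo/a.txt   # -> da cat
```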

Happy computing!





Open source communities are driven by a mutual interest in collaboration and sharing around a common solution. They are filled with passion and energy. As a result, today’s world is powered by open source software, powering the Internet, databases, programming languages, and so much more. It is revolutionizing industries and tackling the toughest challenges. Just check out the projects fostered here at the Linux Foundation for a peek into what is possible. 

What is the challenge? 

As the communities and the projects they support grow and mature, actively recruiting, mentoring, and enabling contributors becomes critical. Organizations now recognize this as they become more and more dependent on open source communities. Yet, while the ethos of open source is transparency and collaboration, the toolchain to automate, visualize, analyze, and manage open source software production remains scattered, siloed, and of varying quality.

How do we address these challenges?

Today, involvement and engagement in open source communities go beyond software developers, extending to engineers, architects, documentation writers, designers, Open Source Program Office professionals, lawyers, and more. To help everyone stay coordinated and engaged, a centralized source of information about their activities, tooling to simplify and streamline information from multiple sources, and a solution to visualize and analyze key parameters and indicators is critical. It can help:

  • Organizations wishing to better understand how to coordinate internal participation in open source and measure outcomes
  • CTOs and engineering leads looking to build a cohesive open source strategy 
  • Project maintainers needing to wrangle the legal and operational sides of the project
  • Individuals keeping track of their open source impact

Enter the Linux Foundation’s LFX Platform – LFX operationalizes this approach, providing tools built to facilitate every aspect of open source development and empowers projects to standardize, automate, analyze, and self-manage while preserving their choice of tools and development workflows in a vendor-neutral platform.

LFX tools do not disrupt a project’s existing toolchain but rather integrate a project’s community tools and ecosystem to provide a common control plane with APIs from numerous distributed data sources and operations tools. It also adds intelligence to drive outcome-driven KPIs and utilizes a best-practices-driven, vendor-agnostic toolchain. It is the place to go for active community engagement and open source activity, enabling the already powerful open source movement to be even more successful.

How does it work? 

Much of the data and information that makes up the open source universe is, not surprisingly, open to see. For instance, GitHub and GitLab both offer APIs that allow third-parties to track all activity on open projects. Social media and public chat channels, blog posts, documentation, and conference talks are also easily captured. For projects hosted at a foundation, such as the Linux Foundation, there is an opportunity to aggregate the public and semi-private data into a privacy respecting, opt-in unified data layer. 

More specifically to an organization or project, LFX is modular, deployable, and API-driven. It is pluggable and can easily integrate the data sources and tools that are already in use by organizations rather than force them to change their work processes. For instance:

  • Source control software (e.g. Git, GitHub, or GitLab)
  • CI/CD platforms (e.g. Jenkins, CircleCI, Travis CI, and GitHub Actions)
  • Project management (e.g. Jira, GitHub Issues)
  • Registries  (e.g. Docker Hub)
  • Documentation  (e.g. Confluence Wiki)
  • Marketing automation (e.g. social media and blogging platforms)
  • Event management platforms (e.g. physical event attendance, speaking engagements, sponsorships, webinar attendance, and webinar presentations)

This holistic and configurable view of projects, organizations, foundations, and more makes it much easier to understand what is happening in open source, from the most granular to the universal.

What do real-world users think? 

Part of LFX is a community forum to ask questions, share solutions, and more. Recently, Jessica Wagantall shared her experience with the Open Network Automation Platform (ONAP). She notes:

ONAP is part of the LF Networking umbrella and consists of 30+ components working together towards the same goal since 2017. Since then, we have faced situations where we have to evaluate if the components are getting enough support during release schedules and if we are identifying our key contributors to the project.

In this time, we have learned a lot as we grow, and we have had the chance to have tools and resources that we can rely on every step of the way. One of these tools is LFX Insights.

We rely on LFX Insights tools to guide the internal decisions and keep the project growing and the contributions flowing.

LFX Insights has become a potent tool that gives us an overview of the project as well as statistics of where our project stands and the changes that we have encountered when we evaluate release content and contribution trends.

Read Jessica’s full post for some specific examples of how LFX Insights helps her and the whole team. 

John Mertic is a seasoned open source project manager. One of his current roles is helping to manage the Academy Software Foundation. John shares:

The Academy Software Foundation was formed in 2018 in partnership with the Academy of Motion Pictures Arts and Sciences to provide a vendor-neutral home for open source software in the visual effects and motion picture industries.

A challenge this industry was having was that there were many key open source projects used in the industry, such as OpenVDB, OpenColorIO, and OpenEXR, that were cornerstones to production but lacked developers and resources to maintain them. These projects were predominantly single-vendor owned and led, and my experience with open source projects in other vertical and horizontal industries shows that this situation leads to sustainability concerns, security issues, and a lack of future development and innovation.

As the project hit its 3rd anniversary in 2021, the Governing Board wanted to assess the impact the foundation had had on increasing the sustainability of these projects. There were three primary dimensions being assessed.

We at the LF know that seeing those metrics increasing is a good sign for a healthy, sustainable project.

Academy Software Foundation projects use LFX Insights as a tool for measuring community health. Using this tool enabled us to build some helpful charts which illustrated the impacts of being a part of the Academy Software Foundation.

We took the approach of looking at before-and-after data on contributors, contributions, and contributor diversity.

Here is one of the charts that John shared. You can view all of them in his post.


