In the UNIX/Linux ecosystem, the sed command is a dedicated tool for editing streams, hence the name (stream editor). It receives text input as a "stream" and performs the specified operations on that stream.

In this guide, we will explore performing in-place file editing with sed.

Prerequisites

To perform the steps demonstrated in this guide, you’ll need the following components:

Editing Stream Using sed

First, let’s have a brief look at how sed operates. The command structure of sed is as follows:

$ sed <options> <operations> <stream>

 
The following command showcases a simple workflow of sed:

$ echo "the quick brown fox" | sed -e 's/quick/fast/'

 

Here,

    • The echo command prints the string on STDOUT. Learn more about STDIN, STDOUT, and STDERR.
    • We’re piping the output to sed. Here, the STDOUT of echo is the stream on which sed performs the specified task.
    • The sed command, as specified, will search for any instance of the word quick and replace it with fast. The resultant stream will be printed on the console.
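One detail worth noting: without the g flag, s/// replaces only the first match on each line; adding g replaces every match. A quick sketch with an arbitrary sample string:

```shell
# Only the first "the" on the line is replaced
echo "the quick fox and the lazy dog" | sed -e 's/the/da/'
# → da quick fox and the lazy dog

# With the g flag, every "the" is replaced
echo "the quick fox and the lazy dog" | sed -e 's/the/da/g'
# → da quick fox and da lazy dog
```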

What if we wanted to modify the texts of a text file? The sed command can also work using text files as the stream. For demonstration, I’ve grabbed the following text file:

 

The following sed command will replace all the instances of the with da:

$ sed -e 's/the/da/g' demo.txt

 

Check the content of demo.txt for changes:

 

From the last example, we can see that sed only printed the resultant stream on the console. The source file (demo.txt) wasn’t touched.

Editing Files In-place Using sed

As demonstrated in the previous example, the default action of sed is to print the changed content on the screen. It’s a great feature that can prevent accidental changes to files. However, if we want to save the changes to the file, we need to provide some additional options.

A simple and common technique would be replacing the content of the file with the sed output. Have a look at the following command:

$ cat demo.txt | sed 's/the/da/g' | tee demo.txt

 

Here, we’re overwriting the contents of demo.txt with the output from the sed command.

While the command functions as intended, it requires extra typing. We involved the cat and tee commands along with sed, and the shell is also involved in redirecting the output. Thus, the command is more resource-intensive than it needs to be.

To solve this, we can use the in-place edit feature of sed. In this mode, sed changes the contents of the file directly. To invoke the in-place edit mode, we use the -i or --in-place flag. The following sed command implements it:

$ sed --in-place -e 's/the/da/g' demo.txt

 

Check demo.txt for changes:

 

As you can see, the file contents are changed without adding any additional components.
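A practical safeguard when editing in place: GNU sed accepts an optional suffix after -i and keeps a backup of the original under that suffix. A sketch, assuming a hypothetical file named notes.txt:

```shell
# Create a sample file to edit (hypothetical name)
printf 'the cat\nthe dog\n' > notes.txt

# Edit in place, keeping the original as notes.txt.bak
sed -i.bak -e 's/the/da/g' notes.txt

cat notes.txt       # the edited content
cat notes.txt.bak   # the untouched original
```

If the edit goes wrong, restoring is just a matter of moving the .bak file back.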

Final Thoughts

In this guide, we demonstrated how to perform in-place edits on text files using sed. While sed itself is a simple program, the main source of its power lies in its ability to incorporate regular expressions. Regex allows describing very complex patterns for sed to act upon. Check out regex in sed to learn more in-depth.

Alternatively, you can use Bash scripts to filter and modify the contents of a file. In fact, you can incorporate sed in your scripts to fine-tune text content. Check out this guide on getting started with Bash scripting.

Happy computing!




Linus Torvalds’ interest in laptops with Apple Silicon comes as no surprise; indeed, he admits he now uses one of those machines himself. However, which distribution he runs remained somewhat up in the air, mostly because Apple Silicon is supported mainly through Asahi, a project that, at heart, is not a conventional distribution but rather an effort to make the Linux kernel run well on computers equipped with Apple’s own processors.

The veteran journalist Steven Vaughan-Nichols, known above all for his articles at ZDNet and for covering Linux and open source for decades, recently interviewed Linus Torvalds face to face for the first time since the COVID-19 pandemic. The opportunity came with the Linux Plumbers Conference, held a short while ago in Dublin, capital of the Republic of Ireland.

The interview touched on various topics, among them the fact that, after years lost to the COVID-19 pandemic, the 20 top kernel maintainers were able to meet in person at the Linux Kernel Maintainer Summit, held on the 15th, also in Dublin. Torvalds speaks in the future tense because the interview was published on the 14th.

Since we have brought up the COVID-19 pandemic and the lockdown it caused, Linus Torvalds commented that it barely affected kernel development. That is because many of the main maintainers and developers, including Torvalds himself, work from home, so circumstances cannot be said to have changed much.

Another interesting topic is Rust. On this subject, Torvalds explains that its inclusion in the stable branch of Linux does not look imminent:

“I thought we’d have it for this one (Linux 6.0), but clearly that didn’t happen. I’m not going to say it will land in version 6.1 (due in October), but enough time has passed that we just need to merge it, since not doing so isn’t helping anything. And it is going to happen, of course. Some people still think we might have problems with it, but if there are problems within two years, we can fix them then.”

Among kernel developers there is some concern about the large number of extensions required to implement Rust in Linux. One example is the new NVMe driver written in that language, which needs 70 extensions to work.

MacBook Air

Source: Pixabay

Returning to Torvalds’ position, the creator of Linux admits that for decades he has relied on exceptions to the C standard: “I’ve been very open about saying that the standard in this area is crap. And we’re going to ignore it because the standard is wrong. So the same will be true on the Rust side.” On the other hand, he is concerned about the stability and reliability of the Rust compiler, above all with regard to GCC, whereas with Clang things seem to be further along.

And finally, let’s answer the big question from the beginning: which distribution does Linus Torvalds use on his MacBook Air with the Apple M2 processor? Here nothing has changed compared with his previous machines: he still uses Fedora Workstation, specifically version 36. At first Torvalds was put off to find the Pacman package manager in Asahi Linux, since he had apparently used it little to not at all, but he managed to get the hang of it quickly and then installed Fedora.


Linus Torvalds uses Fedora because it gives him an environment that is easy to install and friendly for kernel development, but that is far from true on the MacBook Air with Apple Silicon, where he has to jump through hoops to install the system he is most comfortable with.

The creator of Linux admits that installing Fedora on a MacBook with Apple Silicon is a process he cannot recommend to ordinary mortals. Moreover, despite its progress, Asahi’s work still has gaps, such as missing support for the Apple M2’s graphics unit, which rules out 3D acceleration and therefore prevents supporting all of GNOME’s graphical effects. None of that seems to bother Torvalds, though: he prefers his desktop that way, with fewer effects, finding it snappier, and he voluntarily applies the same setup on his other machines.

The web browser is another hurdle. Torvalds likes Google Chrome, but it still has no Linux build for ARM, so he has to settle for a Chromium build for that architecture and port over his passwords, which he keeps in his Google account, via his phone.

In short, Linus Torvalds remains loyal to Fedora, even if he has to go to some trouble to use it on his MacBook Air with the Apple M2 processor. The Asahi team is doing something truly impressive given the conditions and resources it has, and it cannot be ruled out that its work will be merged in the future or become a parallel kernel line that ends up officially adopted by distributions.


Whether you are a system administrator or an average user, keeping your computer infrastructure and network running smoothly is very important. Hence, you need a reliable system monitoring tool that helps you keep track of all the system activities, like CPU performance, memory usage, network traffic, and the status of all the connected devices.

There are many choices of system monitoring tools available on the internet. Still, we have crafted a list of the best ones for you by testing each tool in different circumstances. So, sit back and enjoy the ride to find the best system monitoring tool for Ubuntu that matches your requirements.

1. htop

htop is a cross-platform system monitor, process viewer, and process manager, and a reliable alternative to top, which is also a system monitoring tool for Linux and its distros. It is specially designed and developed for consoles and terminals; hence, it runs in text mode.

It is a feature-rich system monitoring tool that can be used on Linux, FreeBSD, OpenBSD, and macOS. Talking about the features, it offers information based on various parameters, such as tasks, load average, and uptime. You can change the color preferences of its UI to match your requirements.

For Linux and its distros, it provides delay accounting metrics and offers support for custom scripts and real-time signals. Since it is open source and free, it is one of the best system monitoring tools for Linux systems.

$ sudo apt-get install htop

2. Glances

Written in Python, Glances is another cross-platform system monitoring tool on our list. It uses a web-based interface to give you maximum system information in the minimum possible space. Depending on terminal size, it automatically adjusts itself and displays all the information in a single window.

It can also be used in client/server mode, and remote monitoring can be done through the web interface or the terminal. Getting all the important information in one place is one of the strong points of this tool.

The thing I like most about this system monitoring tool is its web interface, which allows remote monitoring. Low-end or older computers running Linux might find it tough to run this tool smoothly, as it demands more CPU resources.

Download Here

3. Stacer

Stacer is an open-source system monitor and optimization tool that helps system administrators manage system resources and tasks under one roof. It is a modern tool with an excellent user interface that makes you feel at home even on first use.

Its feature-rich tools let you manage startup apps; clean unnecessary package caches, crash reports, application logs, application caches, and trash under the system cleaner tab; and start or stop services quickly. You can sort processes by process ID (PID), CPU, and memory usage, find a particular process easily by name in the search bar, and uninstall applications that are no longer required.

The resource tab displays CPU, RAM, disk, CPU load average, and network activity for the last 60 seconds. It also comes with an APT repository manager, which you can use to enable, disable, or delete any repository. Ubuntu users can use this feature to edit the package repositories.

$ sudo add-apt-repository ppa:oguzhaninan/stacer

$ sudo apt-get update

$ sudo apt-get install stacer

4. BashTOP

BashTOP is another cool and reliable system monitoring tool for Linux and its distros, such as Ubuntu. It displays the usage stats for processors, memory, disks, network, and other resources.

It is an excellent tool for desktop and laptop users, who are generally personal users. However, system administrators and server users won’t find this tool as useful, since their demands are higher. Also, it is a little slower than other system monitoring tools, such as htop.

It is an easy-to-use tool and sports a beautiful user interface with everything placed perfectly.

$ sudo add-apt-repository ppa:bashtop-monitor/bashtop

$ sudo apt update

$ sudo apt install bashtop

5. GNOME System Monitor

It is a simple system monitoring tool that comes pre-installed on various Linux distros running the GNOME desktop environment. This tool shows which programs are running and how much processor time, memory, and disk space they are using.

As you can see in the screenshot, it has a clean and simple user interface. All the information and stats are laid out neatly, which makes them easy to read and understand.

The CPU history shows how much processor capacity is used for each CPU, and the memory history shows how much of your computer’s memory (RAM) is being used. Under the network section, you see the download and upload speeds over the last 60 seconds.

6. vtop

vtop is a free and open-source system monitoring tool for Ubuntu and other Linux distros. Using vtop, not only can you monitor system resources, but you can also manage them.

It is a command-line tool written in node.js. Hence, you must first install the node.js and npm packages before installing vtop. Using this tool, you can easily monitor CPU usage and memory usage, something you can also do in other command-line tools like top.

$ sudo apt-get install nodejs

$ sudo apt-get install npm

$ sudo npm install -g vtop

7. nmon

nmon is a simple-to-use system monitoring tool for Linux and its distros, such as Ubuntu. It gives you a quick overview of what’s going on with your server.

This monitoring tool displays the usage stats of CPU, memory, network, disks, file systems, NFS, top processes, and resources. The best thing is that you can select what nmon displays; you simply press specific keys to toggle stats.

$ sudo apt-get install nmon

8. atop

atop is an advanced interactive system and process monitor that displays the load on a Linux system. It shows the stats of the most critical hardware resources, such as CPU, memory, disk, and network.

You can log resource utilization permanently if you want it for long-term analysis.

$ sudo apt-get install atop

9. gotop

gotop is another command-line graphical system monitoring tool for Ubuntu and other Linux distros. Along with Linux, gotop is also available for macOS.

It is inspired by vtop and gtop. But unlike them, it does not use node.js; instead, it is written in Go. You can monitor CPU usage, disk usage, CPU temperature, memory usage, network usage, and the process table.

$ sudo snap install gotop-brlin

Conclusion

These are the best system monitoring tools you can use on your computers running Linux and its distros. Some other tools are available for Ubuntu, but the ones listed above are tested and presented to you.




Sometimes, if you mix different integer types in an expression, you might end up with tricky cases. For example, comparing long with size_t might give different results than comparing long with unsigned short. C++20 brings some help, and there’s no need to learn all the complex rules :)

Conversion and Ranks

 

Let’s have a look at two comparisons:

#include <iostream>

int main() {
    long a = -100;
    unsigned short b = 100;
    std::cout << (a < b);   // 1
    size_t c = 100;
    std::cout << (a < c);   // 2
}   

If you run the code @Compiler Explorer (GCC 12, x86-64, default flags), you’ll see the output 10.

Why? Why not 11?

(By the way, I asked that question on Twitter, see https://twitter.com/fenbf/status/1568566458333990914 – thank you for all the answers and hints)

If we run C++Insights, we’ll see the following transformation:

long a = static_cast<long>(-100);
unsigned short b = 100;
std::cout.operator<<((a < static_cast<long>(b)));
size_t c = 100;
std::cout.operator<<((static_cast<unsigned long>(a) < c));

As you can see, in the first case the compiler converted unsigned short to long, and then comparing -100 to 100 made sense. But in the second case, long was promoted to unsigned long, and thus -100 wrapped around to some super large positive number (the conversion is modulo 2^64 on a typical 64-bit platform).

In general, if you have a binary operation, the compiler needs to have the same types on both sides; if the types differ, the compiler must perform some conversion. See the notes from C++ Reference –

For the binary operators (except shifts), if the promoted operands have different types, additional set of implicit conversions is applied, known as usual arithmetic conversions with the goal to produce the common type (also accessible via the std::common_type type trait)…

As for integral types:

  • If both operands are signed or both are unsigned, the operand with lesser conversion rank is converted to the operand with the greater integer conversion rank.
  • Otherwise, if the unsigned operand’s conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand’s type.
  • Otherwise, if the signed operand’s type can represent all values of the unsigned operand, the unsigned operand is converted to the signed operand’s type.
  • Otherwise, both operands are converted to the unsigned counterpart of the signed operand’s type.

And the conversion rank:

The conversion rank above increases in order bool, signed char, short, int, long, long long (since C++11). The rank of any unsigned type is equal to the rank of the corresponding signed type. The rank of char is equal to the rank of signed char and unsigned char. The ranks of char8_t, (since C++20) char16_t, char32_t, and (since C++11) wchar_t are equal to the ranks of their corresponding underlying types.

For our use case, the rank of unsigned short is smaller than that of long, and thus it was promoted to long. In the second case, the rank of size_t, which is typically unsigned long, is greater than or equal to the rank of long, so we get a promotion to unsigned long.

If you compare signed with unsigned, make sure the signed value is positive to avoid unexpected conversions.

Use Cases

 

In caudillo, we should aim to use the same integral types to avoid various conversion warnings and bugs. For example, the following code:

std::vector numbers {42, 76, 2, 21, 98, 100 };
for (int i = 0; i < numbers.size(); ++i)
        std::cout << i << "(" << numbers[i] << "), ";

This generates a GCC warning under -Wall. However, it can be easily fixed by using unsigned int or size_t as the type of the loop counter.

What’s more, such code might also be improved by various C++ features, for example:

std::vector numbers {42, 76, 2, 21, 98, 100 };
for (int i = 0; auto &num : numbers)
    std::cout << "i: " << i++ << " - " << num << '\n';

The above example uses a range-based-for loop with an initializer (C++20). That way, there’s no need to compare the counter against the container size.

On the other hand, there are situations where you get integral numbers of different types:

long id = -1;
if (id >= 0 && id < container.size()) {

}

In the above sample, I used id, which can have some negative value (to indicate some other properties), and when it’s valid (in range), I can access elements of some container.

In this case, I don’t want to change the type of the id object, so I have to put static_cast<size_t>(id) to avoid warnings.

Putting casts here and there might not be the best idea, not to mention the code style.

Additionally, we should also follow the C++ Core Guideline Rule:

ES.100: Don’t mix signed and unsigned arithmetic:

Reason Avoid wrong results.

Fortunately, in C++20, we have a utility to handle such situations.

It’s called “Safe Integral Comparisons” – P0586 by Federico Kircheis.

Safe integral comparisons functions

 

In the Standard Library we’ll have the following new functions that compare with the “mathematical” meaning:

// <utility> header:
template <class T, class U>
constexpr bool cmp_equal(T t, U u) noexcept;
template <class T, class U>
constexpr bool cmp_not_equal(T t, U u) noexcept;
template <class T, class U>
constexpr bool cmp_less(T t, U u) noexcept;
template <class T, class U>
constexpr bool cmp_greater(T t, U u) noexcept;
template <class T, class U>
constexpr bool cmp_less_equal(T t, U u) noexcept;
template <class T, class U>
constexpr bool cmp_greater_equal(T t, U u) noexcept;
template <class R, class T>
constexpr bool in_range(T t) noexcept;

T and U are required to be standard integer types and so those functions cannot be used to compare std::byte, char, char8_t, char16_t, char32_t, wchar_t and bool.

You can find those functions in the <utility> header file.

This article started as a preview for Patrons months ago.
If you want to get exclusive content, early previews, bonus materials, and access to the Discord server, join the C++ Stories Premium membership.

Examples

 

We can rewrite our initial example into:

#include <iostream>
#include <utility>

int main() {
    long a = -100;
    unsigned short b = 100;
    std::cout << std::cmp_less(a, b);
    size_t c = 100;
    std::cout << std::cmp_less(a, c);
}   

See the code at @Compiler Explorer

And here’s another snippet:

#include <cstdint>
#include <iostream>
#include <utility>
 
int main() {
    std::cout << std::boolalpha;
    std::cout << 256 << "\tin uint8_t:\t" << std::in_range<uint8_t>(256) << '\n';
    std::cout << 256 << "\tin long:\t" << std::in_range<long>(256) << '\n';
    std::cout << -1 << "\tin unsigned:\t" << std::in_range<unsigned>(-1) << '\n';
}

Run @Compiler Explorer

Real code

 

I also looked at some open-source code using codesearch.isocpp.org. I searched for static_cast<int> to see some loop patterns or conditions. Did I find anything interesting?

// actcd19/main/c/chromium/chromium_72.0.3626.121-1/chrome/browser/media/webrtc/window_icon_util_x11.cc:49:

int start = 0;
int i = 0;
while (i + 1 < static_cast<int>(size)) {
    if ((i == 0 || static_cast<int>(data[i] * data[i + 1]) > width * height) &&
        (i + 1 + data[i] * data[i + 1] < static_cast<int>(size))) {

size is probably unsigned, so they always have to convert it in order to compare it against int.

And searching for static_cast<size_t> shows: codesearch.isocpp.org

// actcd19/main/c/chromium/chromium_72.0.3626.121-
// 1/third_party/libwebm/source/common/vp9_level_stats_tests.cc:92:

for (int i = 0; i < frame_count; ++i) {
    const mkvparser::Block::Frame& frame = block->GetFrame(i);
    if (static_cast<size_t>(frame.len) > data.size()) {
        data.resize(frame.len);
        data_len = static_cast<size_t>(frame.len);
        // ...

This time frame.len has to be converted to size_t to allow safe comparisons.

Implementation Notes

 

Since MSVC is on Github, you can quickly see how the feature was developed, see this pull request and even see the code in STL/utility at master · Microsoft/STL.

Here’s the code for cmp_equal():

template <class _Ty1, class _Ty2>
_NODISCARD constexpr bool cmp_equal(const _Ty1 _Left, const _Ty2 _Right) noexcept {
  static_assert(_Is_standard_integer<_Ty1> && _Is_standard_integer<_Ty2>,
   "The integer comparison functions only "
   "accept standard and extended integer types.");
  if constexpr (is_signed_v<_Ty1> == is_signed_v<_Ty2>) {
    return _Left == _Right;
  } else if constexpr (is_signed_v<_Ty2>) {
    return _Left == static_cast<make_unsigned_t<_Ty2>>(_Right) && _Right >= 0;
  } else {
    return static_cast<make_unsigned_t<_Ty1>>(_Left) == _Right && _Left >= 0;
  }
}

And a similar code for cmp_less():

template <class _Ty1, class _Ty2>
_NODISCARD constexpr bool cmp_less(const _Ty1 _Left, const _Ty2 _Right) noexcept {
    static_assert(_Is_standard_integer<_Ty1> && _Is_standard_integer<_Ty2>, "same...");
    if constexpr (is_signed_v<_Ty1> == is_signed_v<_Ty2>) {
        return _Left < _Right;
    } else if constexpr (is_signed_v<_Ty2>) {
        return _Right > 0 && _Left < static_cast<make_unsigned_t<_Ty2>>(_Right);
    } else {
        return _Left < 0 || static_cast<make_unsigned_t<_Ty1>>(_Left) < _Right;
    }
}

Notes:

  • the std:: namespace is omitted here, so is_signed_v is the standard type trait std::is_signed_v, just as make_unsigned_t is std::make_unsigned_t.
  • Notice the excellent and expressive use of if constexpr; it makes metaprogramming code very easy to read.

The code fragments present cmp_equal() and cmp_less(). In both cases, the main idea is to work with the same sign. There are three cases to cover:

  • If both types have the same sign, then we can compare them directly
  • But when the signs differ (the two remaining cases), the code uses make_unsigned_t to convert the _Right or _Left operand and ensures that the signed value is not negative.

Help from the compiler

 

When I asked the question on Twitter, I also got a helpful answer:

My example used only default GCC settings, but it’s best to turn on handy compiler warnings and avoid such conversion bugs at compile time.

Just adding -Wall generates the following warning:

<source>:8:21: warning: comparison of integer expressions of different signedness: 'long int' and 'size_t' {aka 'long unsigned int'} [-Wsign-compare]
    8 |     std::cout << (a < c);
      |                   ~~^~~

See at Compiler Explorer

You can also compile with -Werror -Wall -Wextra, and then the compiler won’t let you run the code with signed to unsigned conversions.

Compiler Support

 

As of September 2022, the feature is implemented in GCC 10.0, Clang 13.0, and MSVC 16.7.

Summary

 

This post discussed some fundamental issues with integer promotions and comparisons. In short, if you have a binary arithmetic operation, the compiler must bring the operands to the same type. Thanks to the promotion rules, some values might be converted from signed to unsigned and thus yield problematic results. C++20 offers a new set of comparison functions, cmp_*, ensuring the sign is handled correctly.

If you want to read more about integer conversions, look at this excellent blog post: The Usual Arithmetic Confusions by Shafik Yaghmour.
And also this one Summary of C/C++ integer rules by Nayuki.

Back to you

  • What’s your approach for working with different integer types?
  • How do you avoid conversion errors?

Share your feedback in the comments below.

It hasn’t started to cool down yet, but a PING can be enjoyed at any temperature, so here is the first one of September, in which you will find things like…

  • Nitrux 2.4. A new maintenance release of Nitrux 2, that singular ‘KDE-flavored’ distribution if ever there was one… And note that a maintenance release here includes new features, too. More information in the official announcement.
  • Armbian 22.08. Beyond Linux for the PC there is Armbian, the Debian variant for ARM devices such as the Raspberry Pi and similar boards, which arrives with broader hardware support in its new release. More information in the official announcement.
  • OpenMandriva Lx 5.0 RC. Among development releases, the most eye-catching recent announcement is the preview of the next OpenMandriva, which has just presented a «Silver Candidate». No small feat. For more details, see the official note.
  • elementary OS 6.1. Moving on to current distributions, Danielle Foré has announced the batch of updates that elementary OS 6.1 received in August, while also taking the chance to keep previewing bits of the upcoming elementary OS 7. You can find it all on the project’s official blog.
  • Linux Mint. Much the same can be said of Linux Mint, whose monthly newsletter was published a few days ago, likewise previewing some of the features coming to the minty distribution. If you’re interested, you know the drill: follow the link.
  • digiKam 7.8. More maintenance releases, in this case for an application as popular in its category as digiKam, the KDE project’s photography manager. In short, a cumulative update on top of what came with digiKam 7.0 and digiKam 7.5. We’ll cover version 8 properly when it arrives; in the meantime, see the official blog.
  • DuckDuckGo Email Protection. A piece of news from MC that I forgot to share around here is the launch of this somewhat unusual service, which you may want to know about. No, it is not a DuckDuckGo take on Firefox Relay: it goes beyond that (tell me how it went if you have tried it).
  • Resurrection guide. Who needs a guide to resurrect an old PC? Probably hardly any of our regulars, but since plenty of other kinds of readers land here, we are echoing a comprehensive guide covering both hardware and software, published once again by our colleagues at MC: how to bring back to life an old PC you have sitting in a corner at home.

The post PING: Nitrux, Armbian, OpenMandriva, digiKam, DuckDuckGo… originally appeared on MuyLinux


Hosted by Kara Swisher and Professor Scott Galloway

Every Tuesday and Friday, tech journalist Kara Swisher and NYU Professor Scott Galloway offer sharp, unfiltered insights into the biggest stories in tech, business, and politics. They make bold predictions, pick winners and losers, and bicker and banter like no one else. After all, with great power comes great scrutiny. From New York Magazine and the Vox Media Podcast Network.

“As a rule, I don’t listen to tech podcasts much at all, since I write about tech almost all day. I check out podcasts about theater or culture — about as far away from my day job as I can get. However, I follow a ‘man-about-town’ guy named George Hahn on social media, who’s a lot of fun. Last year, he mentioned he’d be a guest host of the ‘Pivot’ podcast with Kara Swisher and Scott Galloway, so I checked out Pivot. It’s about tech but it’s also about culture, politics, business, you name it. So that’s become the podcast I dip into when I want to hear a bit about tech, but in a cocktail-party/talk show kind of way.” – Christine Kent, Communications Strategist, Christine Kent Communications





Git is the most popular version control system. Many developers and teams use Git for their activities. One common practice when working with Git is to create branches, which give you a separate working environment. With branches, you can mess around with things without affecting the other sections of the code, and eventually you can compare your branches and then merge them. The question is, “how do you compare two branches using Git?”

Comparing Branches in Git

Git offers the git diff command, which lets you easily compare branches. You can compare the files in the branches and their commits. Let’s look at the various ways of comparing branches.

1. Comparing Two Branches

Creating a new branch and working on it is safe when you’ve cloned a project. That way, you separate it from the main branch without messing up things. Before you merge the two branches, you should compare them to see the differences.

The syntax for comparing branches is:

$ git diff branch0..branch1

In the syntax above, you are checking what changes are present in branch1 but not in branch0. The double-dot compares the tips (HEADs) of the two branches. When resolving a conflict between branches, the double-dot notation gives you more detail about how they differ.

In the image above, we are comparing two branches: linuxhint and master. We can see all the commits in the master branch that are not in the linuxhint branch.
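To make this concrete, here is a minimal, self-contained sketch you can run in a throwaway directory. The repository, file names, and commit messages are all invented for the demo; the branch names master and linuxhint mirror the example above. (The -b flag to git init needs Git 2.28 or newer.)

```shell
set -e
repo=$(mktemp -d)                      # throwaway repo so nothing real is touched
cd "$repo"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
echo "first line" > one.txt
git add one.txt && git commit -q -m "initial commit"
git branch linuxhint                   # linuxhint forks off at this commit
echo "second line" >> one.txt
git commit -q -am "extend one.txt on master"
# Everything master has that linuxhint does not:
git diff linuxhint..master
```

The final command prints a unified diff showing the line added on master after the fork.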

2. Comparing Files in Branches

It’s common to have files with the same name in different branches. It could be a source file whose two versions, one in each branch, you want to compare. In that case, use the syntax below.

$ git diff branch0..branch1 filename

With the syntax above, we compare the given file as it appears at the HEAD of each of the two branches, outlining the differences between the versions.

For instance, in the image below, we can see the difference in the content of the file named one.txt: the words in the two versions are not the same. This command comes in handy when comparing code that has a merge conflict, or your version of a file with someone else’s.
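As a hypothetical sketch of the per-file comparison, the snippet below builds a tiny repo where one.txt differs between two invented branches, master and feature, then diffs only that file across their HEADs (the -- separator makes explicit that what follows is a path):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
echo "the quick brown fox" > one.txt
git add one.txt && git commit -q -m "add one.txt"
git checkout -q -b feature
echo "the fast brown fox" > one.txt
git commit -q -am "reword one.txt on feature"
git checkout -q master
# Restrict the diff to a single file with '--':
git diff master..feature -- one.txt
```

The output shows only one.txt, with the master version as removed lines and the feature version as added lines.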

3. Comparing Commits

Different branches have different commits. When working on the same version of a project in a separate branch, it makes sense to check the difference in commits between two branches. For this, use the git log command with the syntax below.

$ git log branch0..branch1

In the example below, we can see the different commits for each branch, the date of the commit, the author, and the commit message. You can also note the commit ID. Comparing commits between branches is a great way of analyzing a merge conflict.
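A runnable sketch of the commit comparison, again with invented branch names and messages: two commits are made on a feature branch after it forks from master, and git log with the double-dot lists exactly those two (--oneline keeps one commit per line, ID plus message):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "base"
git checkout -q -b feature
git commit -q --allow-empty -m "feature work 1"
git commit -q --allow-empty -m "feature work 2"
# Commits reachable from feature but not from master:
git log --oneline master..feature
```

Drop --oneline to see the full output with author and date, as in the image described above.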

4. Using Triple-dots

Most comparisons involve the HEAD, which is why you will often use the double-dot. However, if you need to compare one branch against the common ancestor of both branches, use the triple-dot.

In the syntax below, we are comparing branch1 with the common ancestor of branch0 and branch1.

$ git diff branch0...branch1
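The sketch below contrasts the two forms in a throwaway repo (branch names main and topic are made up). With three dots, Git diffs topic against the merge base, i.e. the common ancestor, so a file changed only on main after the fork does not appear:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
echo base > file.txt
git add file.txt && git commit -q -m "base"
git checkout -q -b topic
echo topic-change >> file.txt
git commit -q -am "topic work"
git checkout -q main
echo main-change > other.txt
git add other.txt && git commit -q -m "main work"
# Double-dot: compares the two tips directly, so other.txt shows up.
git diff topic..main
# Triple-dot: compares the merge base of main and topic with topic's tip,
# so only the change made on topic shows.
git diff main...topic
```

The triple-dot output contains topic’s edit to file.txt but says nothing about other.txt, which only exists on main.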

Conclusion

Git is an excellent and easy-to-use version control system that offers short commands with great functionality. When working on a project, it’s recommended to create a branch to act as a safe zone for trying things without messing with the original code. This guide covered the various ways of comparing branches in Git to see their differences in commits, HEADs, and files.




Article URL: https://pixelfed.org/

Comments URL: https://news.ycombinator.com/item?id=32700264


Good news for lovers of the purest Ubuntu: Ubuntu Unity can now be considered an official ‘flavor’ of the Ubuntu family, although its first release as such will take a while.

How life turns around, right? It has been more than five years since Canonical threw in the towel in the consumer market, abandoning mobility, convergence, and Unity. Quite a blow for the many of us who hoped to see a triumph where there never was one. Reality, however, prevailed, and the company bet on the money-making path of the professional sector, taking on the return of the GNOME desktop along the way.

Nevertheless, Canonical’s creations refused to disappear. While Ubuntu Touch and Unity 8 (the new version of the desktop adapted to convergence and to mobile and touch devices) were picked up by the UBPorts project, where development continues without much success, the now-classic Unity desktop did likewise, though with far less intensity at first, in the form of an Ubuntu ‘remix’, that is, a direct but unofficial derivative.

But here are those turns life takes: if with the jump to GNOME 3 the most widespread desktop environment came to coexist with the distribution’s other official editions as Ubuntu GNOME, only to disappear once Canonical made its U-turn, Ubuntu Unity would appear some time later, first with the shyness typical of the humblest community projects, then with greater boldness.

Ubuntu Unity 22.04

For more background, Ubuntu Unity emerged as such a couple of years ago, in the wake of the Ubuntu 20.04 LTS release and after some time stumbling along under other names. It did not do so alone, moreover, but alongside other aspirants to official Ubuntu flavor status such as Ubuntu Cinnamon Remix and UbuntuDDE (Deepin). Of the three, however, Ubuntu Unity has been the best regarded, for various reasons: the poor maintenance of UbuntuDDE, for example, or the redundancy of Ubuntu Cinnamon Remix given that Linux Mint exists.

Ubuntu Unity, for its part, has stuck to its guns as far as releases are concerned, with a fairly polished presentation, and has even pushed the Unity desktop forward as far as its means allow. Just a few months ago, Unity received its first major update in six years. No small feat.

It is worth remembering that Unity debuted in Ubuntu 11.04 and took a long time to achieve what it achieved. In other words, Unity was for a long time a disaster that was constantly being patched up; but when it said goodbye with Ubuntu 17.04 (although Ubuntu 16.04 LTS users could keep using it for several more years) it went out on a high note, having established some of the best features of the Linux desktop. A shame, yes.

In contrast to the path taken by Ubuntu GNOME, Ubuntu Unity is now just a ‘remix’, or rather it was, because as the distribution’s maintainers have announced on Twitter, it has been accepted as an official Ubuntu flavor.

The most striking part of the matter is that the proposal was submitted by Rudra Saraswat, founder and lead developer of Ubuntu Unity, this past Monday, August 29. The day before yesterday. And today, Thursday, September 1, he happily announces Canonical’s yes. Another remarkable detail, if ever there was one, is that Saraswat is 12 years old… and has been maintaining Ubuntu Unity for more than two years (with help from other people, but still…).

And now what, you ask? Ubuntu Unity 22.10 will be the first release of Ubuntu Unity as an official flavor, under Canonical’s infrastructure umbrella, and daily builds are already being served. The current versions that still have support are left out of the change but, as noted, will keep their support for as long as it lasts.

If you are interested in learning more about Ubuntu Unity, its official site has everything you need: documentation, news, a discussion forum and, of course, the links to download an image of the distribution and try it out.