The title of this article already hints at its purpose: using the fgets function in C. The fgets() function in C is mainly designed to read input from a user or from an input stream such as a file and display it on the output console. To get the input from a user and display it on the console, we must have some buffer memory or array in which to store that input. Using this function, we can restrict the number of characters taken from an input stream, avoiding excess data and keeping only what we need. This guide covers some C examples to explain the use of the fgets() function in detail.

Updating the system is a must before performing any sort of coding on it, as it takes care of memory-related issues and keeps your system fully featured. Thus, the “update” keyword with the “apt” utility and “sudo” privileges is required. After entering this query in your Linux shell, it asks for the password of the currently logged-in user.
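Concretely, the query described above is:

$ sudo apt update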

If the Linux system already has a C compiler configured, you don’t need to add it again. If it’s not configured, you will face problems while executing the code. Thus, install it by utilizing the “apt” utility within an “install” instruction followed by the “gcc” package name.
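In other words, something like:

$ sudo apt install gcc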

Before taking a close look at the C example for the fgets function, we have to create a “.c” file. After creating “fgets.c” with the “touch” query, it can be seen in the list of files and folders of the current “home” directory using the “ls” query.
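The two queries mentioned above look like this:

$ touch fgets.c
$ ls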

After the file has been successfully created, we open it in Linux’s “GNU Nano” editor by running the editor command in the shell.
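The command in question is simply the editor invoked on our file:

$ nano fgets.c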

Example 1:

We perform the first example of C to utilize the fgets function to get input data from the user at run time and display a specific range of characters from it on the shell. Here is the C code that is used to get the data from a user. This code uses the standard input/output header, stdio.h, for the standard input and output streams. Before the main() function, we define a MAX macro with the value 20 that is used as a size limit. The main() method holds the overall functionality.

The character type array “A” of size “MAX” is declared, and the printf() function of C is used to display the “Input: ” string on the shell to ask the user for input. The fgets() function of C is called by passing it the “A” array, the MAX limit, and the “stdin” standard input stream as arguments to get the input from the user. This input is saved to the “A” array up to “MAX” length: at most 19 characters (MAX-1, leaving room for the terminating null byte) are stored, and the rest is discarded.

The printf() statement then uses the “A” array to display the stored characters from the input. The return 0 statement ends the program smoothly after execution. Save the file before executing this code.

#include <stdio.h>

#define MAX 20

int main() {
    char A[MAX];
    printf("Input: ");
    fgets(A, MAX, stdin);
    printf("Data: %s\n", A);
    return 0;
}

After saving our C fgets() function code to its file, we exit the “nano” editor with “Ctrl+X” and apply the “gcc” command to the “fgets.c” file to compile it. The compiler printed nothing, which means there were no errors, so we ran the program with the “./a.out” query. The “Input” string appeared along with a text area waiting for our input.
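Spelled out, the compile-and-run sequence is:

$ gcc fgets.c
$ ./a.out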

We type a single line of more than 30 characters and press the “Enter” key. The inputted data is displayed up to the first 19 characters, and the rest is discarded.

Example 2:

In our next C illustration, we demonstrate the use of the fgets() function to get text data from an input file stream and display it on the console. The main structure of this code is very similar to the previous example. The main() function starts with the declaration of a file pointer “f” using the built-in FILE type. The character type array “A” of size 30 is declared, and the fopen() function of C is called to open the input stream “fgets.txt” from the system for reading.

The returned value is saved to the file pointer “f”. The “if” statement checks whether the value of “f” is NULL. If the value is NULL, it reports an error using the perror() function of C. Otherwise, the “else-if” part of the statement is executed, which calls the fgets() function.

The purpose of using the fgets() method is to read up to 29 characters (one less than the buffer size of 30) from the input stream “f”, save them to the “A” array, and check that the result is not NULL. When the result is not NULL, the puts() function of C is called to display the buffered input on the console, passing it the “A” array as an argument. The stream is then closed using the fclose() function.

#include <stdio.h>

int main() {
    FILE *f;
    char A[30];
    f = fopen("fgets.txt", "r");
    if (f == NULL) {
        perror("Error!");
        return 1;
    } else if (fgets(A, 30, f) != NULL) {
        puts(A);
    }
    fclose(f);
    return 0;
}

Before compiling our code, we show the text data within the fgets.txt file (which is used as the input stream) with the “cat” display command. It shows two lines of text data. Now, the compilation and execution take place. We get only the first line in the output area. This is because fgets() stops reading at the first newline (or after 29 characters, whichever comes first), so a single call fetches only the first line of the file.
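To make this concrete, here is what the session could look like, assuming fgets.txt holds two hypothetical lines (the actual contents used in the article are not reproduced here):

$ cat fgets.txt
the quick brown fox
jumps over the lazy dog
$ gcc fgets.c
$ ./a.out
the quick brown fox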

Conclusion

To conclude this article, we are confident that you will not regret taking help from it when you want to learn the basics of the fgets() function of C. We discussed how you can utilize this function to get standard input from the user at runtime, and how you can get data from a file stream and display it in the output area.




If you’ve ever tried Deno Deploy, our multi-tenant distributed
JavaScript cloud, you may understand some of these glowing reviews:

these deno deploy times are incredible

deno deploy is the simplest

deno deploys are so fast

deno deploy was so smooth and frictionless

(And if you haven’t,
deploy a Fresh site today!)

Achieving this deployment speed was no accident. Designing Deno Deploy was an
opportunity to re-imagine how we want the developer experience of deploying an
app to be, and we wanted to focus on speed and security. Instead of deploying
VMs or containers, we decided on
V8 isolates,
which allows us to securely run untrusted code with significantly less overhead.

This blog post explains what V8 isolates are and outlines the major
architectural pieces of the Deno Deploy isolate cloud.

Deno Deploy Projects and Deployments

Within Deno Deploy, users and organizations manage their applications as
projects. Deploy projects can be linked to GitHub repositories, enabling
code to be redeployed on each git push. Projects also allow applications to
manage environment variables, provision TLS certificates, and associate domain
names. The Deploy dashboard also provides views of logs and analytics out of the
box for increased visibility into an application’s behavior.

As previously mentioned, Deploy projects can be linked to GitHub repositories so
that running applications are updated each time a change is pushed to the
backing repository. In Deploy, each of these updates is known as a
deployment. Deployments are immutable snapshots of an application. Each
deployment has a unique URL that can be used to test or otherwise interact with
the application over the Internet. A project can have many deployments, but only
one of the deployments can be tagged as the production environment at any given
time.

Figure 1 shows the Deployments dashboard for deno.land.
Each deployment shows its unique deno.dev URL, as well as the git commit
information that was used to create the deployment. The “Prod” label indicates
the deployment that is currently running in production on deno.land.

Figure 1. Deployments dashboard in Deno Deploy.

Routing Requests to Deployments

But how do users’ HTTP requests reach a running deployment in the Deno Deploy
cloud?

Each deployment is exposed to the Internet via one or more public URLs. During
DNS resolution, all of these URLs are mapped to Deno Deploy’s public IPv4 or
IPv6 address. This ensures that all deployment traffic is routed to the Deploy
servers.

Depending on where the user is in the world, sending all traffic to the same
destination can be very inefficient. For example, if a user is located in
Australia, but all of the Deploy servers are located on the East coast of the
United States, each HTTP request, including all necessary TCP handshaking, would
need to travel thousands of miles around the globe.

Because Deno Deploy is intended to run JavaScript at the edge, it is critical
that requests are instead intelligently routed to the edge location nearest the
user. Deploy leverages anycast routing
to share the same IPv4 and IPv6 addresses among the 30+ edge locations around
the world, shown in Figure 2.

Figure 2. Deno Deploy’s 30+ edge locations around the world.

Once an incoming request is routed to the appropriate edge location, it’s handed
off to a runner process. As its name implies, a runner is responsible for
running applications, including receiving traffic and routing it to the correct
deployment.

Deploy maintains a mapping table between domain names and deployments to serve
this purpose. Depending on the protocol in use, the HTTP/1.1 Host header or
HTTP/2 :authority pseudo-header is used to determine the domain that the
request is intended for. Then, the TLS
Server Name Indication (SNI)
is used to determine which TLS certificate to use for the connection. Finally,
the runner checks the mapping table to determine the correct deployment to send
the request to. (As an aside, Deploy HTTP traffic is redirected to HTTPS, so the
SNI can be used reliably.)
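As a rough TypeScript sketch of that lookup (not Deploy’s actual code; every name here is illustrative):

// Hypothetical sketch of the routing step described above.
type DeploymentId = string;

// domain name -> deployment that should receive its traffic
const routingTable = new Map<string, DeploymentId>();

function resolveDeployment(req: Request): DeploymentId | null {
  // HTTP/1.1 carries the domain in the Host header; server runtimes
  // surface the HTTP/2 :authority pseudo-header the same way.
  const host = req.headers.get("host");
  if (host === null) return null;
  const domain = host.split(":")[0]; // strip an optional port
  return routingTable.get(domain) ?? null;
}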

The Runner as an Isolate Hypervisor

In a traditional cloud, the hypervisor is responsible for creating, running, monitoring, and destroying virtual machines. In Deploy, the runner acts as an isolate hypervisor, serving the same purpose, but working with processes running V8 isolates instead of virtual machines.

Before going any further, we need to understand what an isolate is.
V8, the JavaScript engine that powers Deno, Google Chrome, and several other popular runtimes, documents an isolate as:

Isolate represents an isolated instance of the V8 engine. V8 isolates have
completely separate states. Objects from one isolate must not be used in other
isolates.

This means that an isolate is a distinct execution context, including code,
variables, and everything needed to run a JavaScript application.

To help understand this, imagine a browser window with multiple tabs. Each tab
executes JavaScript for a single webpage, but the JavaScript code from each tab
generally cannot interact with the code from other tabs. In this example, each
browser tab executes JavaScript code in its own V8 isolate.

The Deno Deploy cloud takes a similar approach. Each deployment executes its own
JavaScript code in its own V8 isolate in its own process. After an incoming
request has been mapped to a deployment, the runner is responsible for passing
the request off to the deployment process. If the requested deployment is not
already running, the runner starts a new instance of the deployment first. In
order to conserve resources, the runner also terminates deployments that have
been running for a while without receiving traffic.

A high level architectural view of a runner process and deployment processes is
shown in Figure 3.

Figure 3. Deploy runner and isolate pool architecture.

Security Considerations

Allowing many arbitrary users to upload and execute untrusted code is an
extremely difficult technical challenge to solve effectively. Layered security
is generally considered to be a good practice, but in this case it is a
necessity because there is no single security mechanism that solves the Deno
Deploy use case.

This section discusses some of the ways in which Deploy improves its security
posture.

Using a more restrictive Deno runtime. Deno takes security very seriously.
The open source Deno CLI uses its built-in
permission system to lock down what a JavaScript application can do. Deno Deploy
takes things one step further by using a custom build of Deno that is even more
restrictive. For example, deployments can read their own static assets, but
cannot write to the file system.

Relying on V8 sandboxing. V8 was designed to run untrusted JavaScript code
in a hostile environment — the browser. While V8 isolates alone do not provide a
perfect sandbox, they do a good enough job for many use cases. V8 is also a very
high profile public project with corporate backing. This leads to constant
auditing, fuzzing, and testing of the codebase to ensure that it is secure.
Occasional
security exploits
are reported, which Google takes very seriously. The Deno team is committed to
staying up to date with V8 releases, as well as following the V8 team’s
recommendations for running untrusted code.

Running deployments in separate processes. V8 isolates are already…
isolated… from one another. Deno Deploy goes a step further and enlists the
operating system’s help to improve the isolation of each deployment.

Hypervisor monitoring of resource utilization. The runner continuously
tracks the metrics of all running deployments. This is necessary for billing
purposes, but also allows the runner to enforce resource utilization quotas.
Deployments that consume too many resources are terminated to prevent service
degradation.

Restricting network access. Cloud providers and the operating system allow
network access to be customized and locked down. Deploy also employs separate
networks for internal control plane traffic and end user data plane traffic.

Restricting allowed system calls. As an added layer of security, Deploy uses
seccomp filtering to limit the system
calls that user code is allowed to execute.

Using Rust as the implementation language. When it comes to Deploy,
JavaScript is the language in the cloud, but Rust is the language of the
cloud. Rust enforces memory safety, eliminating an entire class of bugs.
Conveniently, Rust also provides performance on par with C/C++.

What’s Next?

Every day, as we iterate and improve on Deno Deploy, more developers and
businesses choose to run production-ready applications and infrastructure with
us. We’re confident that our architectural decisions enable them to focus on
building for their users, without having to worry about security or performance.

Check out Deno Deploy and let us know what you think!

Do you find these challenges interesting to solve? We’re
hiring!

Public cloud? Private cloud? Hybrid cloud? When, as part of a company’s digital transformation, the migration of workloads from a traditional infrastructure to a cloud-based model comes up, doubts arise about which one to choose, but above all about how to drive and consolidate the change. To reach a safe harbor, of course, it helps to know what you are doing.

For that, nothing beats having the right guidance. That is what you will find in Tu viaje a la nube empieza aquí (“Your journey to the cloud starts here”), a free course taught by Matías Sosa, Product Marketing Manager at OVHcloud and an expert in systems and virtualization, who is in charge of guiding customers and partners in the management of their infrastructures and their migration to the cloud.

In this video course, the OVHcloud expert explains, in four practical lessons, the advantages of moving your virtualized workloads to the cloud, how you can do it with VMware, and why OVHcloud Hosted Private Cloud is the best infrastructure for doing so.

In the first video, you will learn how to go from virtualization in the on-premises data center to virtualization in the cloud and what advantages you can expect to gain in this new environment.

In the second, we look at the difference between extending our data centers to the cloud and porting them there, what you can expect from the various providers, and why VMware can be the ideal option for this journey.

In the third episode, we explain what you will find in a basic OVHcloud Hosted Private Cloud plan and which technologies it uses to ensure the continuity of your business. We also look at the importance of technologies such as vSphere, vMotion, and Fault Tolerance. Finally, in the last episode of this course, we explain the unique way in which VMware vSAN is deployed on OVHcloud servers and how your company can benefit from this exclusive configuration.

In short, in a simple way and just one click away, we put in your hands a completely free course that will clear up the main doubts you may have about your next migration to the cloud. Are you up for trying it? Sign up here!

Image: Pexels



“In the UNIX/Linux ecosystem, the sed command is a dedicated tool for editing streams, hence the name (stream editor). It receives text input as a ‘stream’ and performs the specified operations on it.”

In this guide, we will explore performing in-place file editing with sed.

Prerequisites

To perform the steps demonstrated in this guide, you’ll need a Linux or UNIX-like system with sed installed (it ships by default with practically all distributions).

Editing Stream Using sed

First, let’s have a brief look at how sed operates. The command structure of sed is as follows:

$ sed <options> <operations> <stream>

The following command showcases a simple workflow of sed:

$ echo "the quick brown fox" | sed -e 's/quick/fast/'


Here,

    • The echo command prints the string on STDOUT. Learn more about STDIN, STDOUT, and STDERR.
    • We’re piping the output to the sed command. Here, the STDOUT of echo is the stream on which sed will perform the specified task.
    • The sed command, as specified, will search for any instance of the word quick and replace it with fast. The resultant stream will be printed on the console.

What if we wanted to modify the contents of a text file? The sed command can also work using text files as the stream. For demonstration, I’ve grabbed the following text file:
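Suppose, for illustration, that demo.txt contains the following two lines (hypothetical contents, shown here via cat):

$ cat demo.txt
the quick brown fox jumps over the lazy dog
the early bird catches the worm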


The following sed command will replace all the instances of the with da:

$ sed -e 's/the/da/g' demo.txt


Check the content of demo.txt for changes:


From the last example, we can see that sed only printed the resultant stream on the console. The source file (demo.txt) wasn’t touched.

Editing Files In-place Using sed

As demonstrated in the previous example, the default action of sed is to print the changed content on the screen. It’s a great feature that can prevent accidental changes to files. However, if we want to save the changes to the file, we need to provide some additional options.

A simple and common technique would be replacing the content of the file with the sed output. Have a look at the following command:

$ cat demo.txt | sed 's/the/da/g' | tee demo.txt


Here, we’re overwriting the contents of demo.txt with the output from the sed command.

While the command functions as intended, it requires extra typing. We involved the cat and tee commands along with the sed command, and Bash is also involved in redirecting the output. Thus, the command is more resource-intensive than it needs to be.

To solve this, we can use the in-place edit feature of sed. In this mode, sed changes the contents of the file directly. To invoke the in-place edit mode, we have to use the -i or --in-place flag. The following sed command implements it:

$ sed --in-place -e 's/the/da/g' demo.txt


Check demo.txt for changes:


As you can see, the file contents are changed without involving any additional commands.
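As an aside, GNU sed also accepts a backup suffix appended directly to -i; in that case, it saves a copy of the original file before editing it in place:

$ sed -i.bak 's/the/da/g' demo.txt

After this, the untouched original survives as demo.txt.bak.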

Final Thoughts

In this guide, we successfully demonstrated performing in-place edits on text files using sed. While sed itself is a simple program, the main source of power lies within its ability to incorporate regular expressions. Regex allows describing very complex patterns that sed acts upon. Check out regex in sed to learn more in-depth.

Alternatively, you can use Bash scripts to filter and modify the contents of a file. In fact, you can incorporate sed in your scripts to fine-tune text content. Check out this guide on getting started with Bash scripting.

Happy computing!





Linus Torvalds’ interest in laptops with Apple Silicon is no surprise; what’s more, he himself admits that he now uses one of those machines. However, which distribution he runs on it remained somewhat up in the air, all the more so because Apple Silicon is supported mainly through Asahi, a project that, at bottom, is not a conventional distribution, but rather an effort to make the Linux kernel work well on the computers that carry Apple’s own processors.

The veteran journalist Steven Vaughan-Nichols, known above all for his articles on ZDNet and for covering Linux and open source news for decades, has been able to interview Linus Torvalds face to face for the first time since the COVID-19 pandemic. The opportunity came with the Linux Plumbers Conference held recently in Dublin, capital of the Republic of Ireland.

The interview touched on various topics, among them the fact that, after years lost to the COVID-19 pandemic, the 20 main kernel maintainers were able to meet in person at the Linux Kernel Maintainer Summit, held on the 15th, also in Dublin. Torvalds speaks in the future tense because the interview was published on the 14th.

Since we have brought up the COVID-19 pandemic and the lockdown it caused, Linus Torvalds commented that it barely affected kernel development. That was because many of the main maintainers and developers, including Torvalds himself, work from home, so it cannot be said that circumstances changed much.

Another interesting topic is Rust. On this subject, Torvalds explains that its inclusion in the stable branch of Linux does not look like it will happen immediately:

“I thought we’d have it for this one (Linux 6.0), but clearly that didn’t happen. I’m not going to say it will land in 6.1 (due out in October), but enough time has passed that we just need to merge it, since not doing so isn’t helping anything. And it is going to happen, of course. Some people still think we might have problems with it, but if there are problems within two years, we can fix them then.”

Among kernel developers there is concern about the large number of extensions needed to implement Rust in Linux. One example of this is the new NVMe driver written in that language, which needs 70 extensions to work.

MacBook Air. Source: Pixabay.

Returning to Torvalds’ position, the creator of Linux admits that for decades he has been using exceptions to the C standard: “I’ve been very open in saying that the standard in this area is crap. And we’re going to ignore it, because the standard is wrong. So the same will be true on the Rust side.” On the other hand, he is concerned about the stability and reliability of the Rust compiler, above all with regard to GCC, while with Clang things seem to be further along.

And to finish, let’s answer the big question from the beginning: which distribution does Linus Torvalds use on his MacBook Air with the Apple M2 processor? Well, there have been no changes here compared with previous machines, so he keeps using Fedora Workstation, specifically version 36. At first Torvalds was put off to find the Pacman package manager in Asahi Linux, since he had apparently used it little if at all, but he was able to get the hang of it in a short time and then install Fedora.


Linus Torvalds uses Fedora because it gives him an environment that is easy to install and friendly for developing the kernel, but that is not remotely the case on the MacBook Air with Apple Silicon, on which he has to juggle things to install the system he feels most comfortable with.

The creator of Linux admits that installing Fedora on a MacBook with Apple Silicon is a process he cannot recommend to ordinary mortals. On the other hand, and despite its progress, the work done by Asahi still has gaps, such as not yet supporting the Apple M2’s GPU, so it cannot run 3D acceleration and is therefore unable to support all of GNOME’s graphical effects. However, none of that seems to matter to Torvalds, who prefers to have his desktop that way, with fewer effects, finding it snappier, and who voluntarily applies the same approach on his other machines.

The web browser is another stumbling block. Torvalds likes Google Chrome, but it still has no ARM build for Linux, so he has no choice but to settle for a Chromium build suited to that architecture and to port over his passwords, which he keeps in his Google account, via his phone.

In short, Linus Torvalds remains loyal to Fedora, even if he has to go out of his way to use it on his MacBook Air with the Apple M2 processor. The Asahi team is doing something truly impressive given the conditions and resources at its disposal, and it cannot be ruled out that its work will be merged in the future or become a parallel kernel line that ends up being officially picked up by the distributions.


It does not matter whether you’re a system administrator or an average user: keeping your computer infrastructure and network running smoothly is very important. Hence, you need a reliable system monitoring tool that helps you keep track of system activities like CPU performance, memory usage, network traffic, and the status of all connected devices.

There are many choices available on the internet for system monitoring tools. Still, we have crafted a list of the best system monitoring tools for you by testing each tool in different circumstances. So, sit back and enjoy the ride to find the best system monitoring tool for Ubuntu that matches your requirements.

1. htop

htop is a cross-platform system monitor, process viewer, and process manager, and a reliable alternative to top, which is also a system monitoring tool for Linux and its distros. It is specially designed and developed for consoles and terminals; hence, it supports text mode.

It is a feature-rich system monitoring tool that can be used on Linux, FreeBSD, OpenBSD, and macOS. Talking about the features, it offers information based on various parameters, such as tasks, load average, and uptime. You can change the color preferences of its UI to match your requirements.

For Linux and its distros, it provides delay accounting metrics and offers support for custom scripts and real-time signals. Since it is open source and free, it is one of the best system monitoring tools for Linux systems.

$ sudo apt-get install htop

2. Glances

Written in Python, Glances is another cross-platform system monitoring tool on our list. It uses a web-based interface to give you maximum system information in the minimum possible space. Depending on terminal size, it automatically adjusts itself and displays all the information in a single window.

It can also be used in client/server mode, and remote system monitoring can be done through the web interface or the terminal. Having all the important information in one place is one of the strengths of this tool.

The thing I like most about this system monitoring tool is its web interface, which allows remote monitoring. However, low-end or older computers might find it tough to run this tool smoothly, as it demands comparatively high CPU resources.

Download Here

3. Stacer

Stacer is an open-source system monitor and optimization tool that helps system administrators manage system resources and tasks under one roof. It is a modern tool with an excellent user interface that makes you feel at home even on first use.

Its feature-rich tools let you manage startup apps; clean unnecessary package caches, crash reports, application logs, application caches, and trash under the system cleaner tab; and start or stop services quickly. You can sort processes based on process id (PID), CPU, and memory usage, find a particular process easily by typing its name in the search bar, and uninstall applications that are no longer required.

The resources tab displays CPU, RAM, disk, CPU load average, and network activity for the last 60 seconds. It also comes with an APT repository manager, which you can use to activate, disable, or delete any repository. Ubuntu users can use this feature to edit the package repositories.

$ sudo add-apt-repository ppa:oguzhaninan/stacer

$ sudo apt-get update

$ sudo apt-get install stacer

4. BashTOP

BashTOP is another cool and reliable system monitoring tool for Linux and its distros, such as Ubuntu. It displays the usage stats for processors, memory, disks, network, and other resources.

It is an excellent tool for desktop and personal computer users. However, system administrators and server users won’t find this tool as useful, since their demands are higher. Also, it is a little slower compared to other system monitoring tools, such as htop.

It is an easy-to-use tool and sports a beautiful user interface with everything placed perfectly.

$ sudo add-apt-repository ppa:bashtop-monitor/bashtop

$ sudo apt update

$ sudo apt install bashtop

5. GNOME System Monitor

It is a simple system monitoring tool that comes pre-installed on various Linux distros running the GNOME desktop environment. This tool shows which programs are running and how much processor time, memory, and disk space are being used.

As you can see in the screenshot, it has a clean and simple user interface. Every information and stats are placed perfectly in the user interface, which makes it easy to read and understand.

The CPU history tab shows how much processor capacity is used by each CPU, the memory and swap history tabs show how much of your computer’s memory (RAM) is being used, and under the network tab you can see the download and upload speeds of the network over the last 60 seconds.
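Since it usually ships with GNOME, there is nothing to install; you can launch it from the Activities overview or directly from a terminal:

$ gnome-system-monitor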

6. vtop

vtop is a free and open-source system monitoring tool for Ubuntu and other Linux distros. Using vtop, not only can you monitor system resources, but you can also manage them.

It is a command-line tool written in Node.js. Hence, you must first install the Node.js and npm packages before installing vtop. Using this tool, you can easily monitor CPU usage and memory usage, as you would in other command-line tools like top.

$ sudo apt-get install nodejs

$ sudo apt-get install npm

$ sudo npm install -g vtop

7. nmon

nmon is a simple-to-use system monitoring tool for Linux and its distros, such as Ubuntu. It gives you a quick overview of what’s going on with your server.

This monitoring tool displays the usage stats of the CPU, memory, network, disks, file system, NFS, top processes, and resources. The best thing is that you can select what nmon displays; you simply press specific keys to toggle the stats.

$ sudo apt-get install nmon

8. atop

atop is an advanced interactive system and process monitor that displays the load on a Linux system. It shows the stats of the most critical hardware resources: CPU, memory, disk, and network.

You can log resource utilization permanently if you want it for long-term analysis.

$ sudo apt-get install atop

9. gotop

gotop is another command-line graphical system monitoring tool for Ubuntu and other Linux distros. Along with Linux, gotop is also available for macOS.

It is inspired by vtop and gtop. But unlike them, it does not use Node.js; instead, it is written in Go. You can monitor CPU usage, disk usage, CPU temperature, memory usage, network usage, and the process table.

$ sudo snap install gotop-brlin

Conclusion

These are the best system monitoring tools you can use on computers running Linux and its distros. Some other tools are available for Ubuntu, but the ones listed above have been tested and are presented to you.




Sometimes, if you mix different integer types in an expression, you might end up with tricky cases. For example, comparing long with size_t might give different results than comparing long with unsigned short. C++20 brings some help, and there’s no need to learn all the complex rules :)

Conversion and Ranks


Let’s have a look at two comparisons:

#include <iostream>

int main() {
    long a = -100;
    unsigned short b = 100;
    std::cout << (a < b);   // 1
    size_t c = 100;
    std::cout << (a < c);   // 2
}   

If you run the code @Compiler Explorer (GCC 12, x86-64, default flags) you’ll see:

10

Why? Why not 11?

(By the way, I asked that question on Twitter, see https://twitter.com/fenbf/status/1568566458333990914 – thank you for all the answers and hints)

If we run C++Insights, we’ll see the following transformation:

long a = static_cast<long>(-100);
unsigned short b = 100;
std::cout.operator<<((a < static_cast<long>(b)));
size_t c = 100;
std::cout.operator<<((static_cast<unsigned long>(a) < c));

As you can see, in the first case, the compiler converted unsigned short to long, and then comparing -100 to 100 made sense. But in the second case, long was promoted to unsigned long, and thus -100 became -100 modulo (std::numeric_limits<size_t>::max() + 1), which is some extremely large positive number.

In general, if you have a binary operation, the compiler needs to have the same types on both sides; if the types differ, the compiler must perform some conversion. See the notes from C++ Reference –

For the binary operators (except shifts), if the promoted operands have different types, additional set of implicit conversions is applied, known as usual arithmetic conversions with the goal to produce the common type (also accessible via the std::common_type type trait)…

As for integral types:

  • If both operands are signed or both are unsigned, the operand with lesser conversion rank is converted to the operand with the greater integer conversion rank.
  • Otherwise, if the unsigned operand’s conversion rank is greater or equal to the conversion rank of the signed operand, the signed operand is converted to the unsigned operand’s type.
  • Otherwise, if the signed operand’s type can represent all values of the unsigned operand, the unsigned operand is converted to the signed operand’s type.
  • Otherwise, both operands are converted to the unsigned counterpart of the signed operand’s type.

And the conversion rank:

The conversion rank increases in the order bool, signed char, short, int, long, long long (since C++11). The rank of any unsigned type is equal to the rank of the corresponding signed type. The rank of char is equal to the rank of signed char and unsigned char. The ranks of char8_t (since C++20), char16_t, char32_t (since C++11), and wchar_t are equal to the ranks of their corresponding underlying types.

For our use case, the rank of unsigned short is smaller than the rank of long, and thus it was promoted to long. In the second case, the rank of size_t, which is unsigned long here, is greater than or equal to the rank of long, so we get a promotion to unsigned long.
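You can verify both outcomes with the std::common_type trait mentioned in the quote above. A minimal sketch (the second assertion assumes a platform where size_t is unsigned long, such as x86-64 Linux):

#include <cstddef>
#include <type_traits>

// unsigned short has a lower rank than long, so the common type is long:
static_assert(std::is_same_v<std::common_type_t<long, unsigned short>, long>);

// size_t (unsigned long here) has a rank >= long, so the common type is unsigned:
static_assert(std::is_same_v<std::common_type_t<long, std::size_t>, std::size_t>);

int main() {} // compiles only if both assertions hold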

If you compare signed with unsigned, make sure the signed value is positive to avoid unexpected conversions.

Use Cases


In general, we should aim to use the same integral types to avoid various conversion warnings and bugs. For example, the following code:

std::vector numbers {42, 76, 2, 21, 98, 100 };
for (int i = 0; i < numbers.size(); ++i)
        std::cout << i << "(" << numbers[i] << "), ";

It will generate a GCC warning with -Wall. However, it can be easily fixed by using unsigned int or size_t as the type of the loop counter.

What’s more, such code might also be improved by various C++ features, for example:

std::vector numbers {42, 76, 2, 21, 98, 100 };
for (int i = 0; auto &num : numbers)
    std::cout << "i: " << i++ << " - " << num << '\n';

The above example uses a range-based-for loop with an initializer (C++20). That way, there’s no need to compare the counter against the container size.

On the other hand, there are situations where you get integral numbers of different types:

long id = -1;
if (id >= 0 && id < container.size()) {

}

In the above sample, I used id, which can have some negative value (to indicate some other properties), and when it’s valid (in range), I can access elements of some container.

In this case, I don’t want to change the type of the id object, so I have to put static_cast<size_t>(id) to avoid warnings.

Putting casts here and there might not be the best idea, not to mention the code style.

Additionally, we should also follow the C++ Core Guideline Rule:

ES.100: Don’t mix signed and unsigned arithmetic:

Reason: Avoid wrong results.

Fortunately, in C++20, we have a utility to handle such situations.

It’s called “Safe Integral Comparisons” – P0586 by Federico Kircheis.

Safe integral comparisons functions


In the Standard Library we’ll have the following new functions that compare with the “mathematical” meaning:

// <utility> header:
template <class T, class U>
constexpr bool cmp_equal (T t , U u) noexcept
template <class T, class U>
constexpr bool cmp_not_equal (T t , U u) noexcept
template <class T, class U>
constexpr bool cmp_less (T t , U u) noexcept
template <class T, class U>
constexpr bool cmp_greater (T t , U u) noexcept
template <class T, class U>
constexpr bool cmp_less_equal (T t , U u) noexcept
template <class T, class U>
constexpr bool cmp_greater_equal (T t , U u) noexcept
template <class R, class T>
constexpr bool in_range (T t) noexcept

T and U are required to be standard integer types, so those functions cannot be used to compare std::byte, char, char8_t, char16_t, char32_t, wchar_t, and bool.

You can find those functions in the <utility> header file.

This article started as a preview for Patrons months ago. If you want to get exclusive content, early previews, bonus materials, and access to the Discord server, join the C++ Stories Premium membership.

Examples


We can rewrite our initial example into:

#include <iostream>
#include <utility>

int main() {
    long a = -100;
    unsigned short b = 100;
    std::cout << std::cmp_less(a, b);
    size_t c = 100;
    std::cout << std::cmp_less(a, c);
}   

See the code at @Compiler Explorer. This time both comparisons print 1 (the output is 11), matching the mathematical meaning of “less than”.

And here’s another snippet:

#include <cstdint>
#include <iostream>
#include <utility>
 
int main() {
    std::cout << std::boolalpha;
    std::cout << 256 << "tin uint8_t:t" << std::in_range<uint8_t>(256) << 'n';
    std::cout << 256 << "tin long:t" << std::in_range<long>(256) << 'n';
    std::cout << -1 << "tin uint8_t:t" << std::in_range<unsigned>(-1) << 'n';
}

Run @Compiler Explorer

Real-world code


I also looked at some open-source code using codesearch.isocpp.org. I searched for static_cast<int> to see some loop patterns or conditions. Did I find anything interesting?

// actcd19/main/c/chromium/chromium_72.0.3626.121-1/chrome/browser/media/webrtc/window_icon_util_x11.cc:49:

int start = 0;
int i = 0;
while (i + 1 < static_cast<int>(size)) {
    if ((i == 0 || static_cast<int>(data[i] * data[i + 1]) > width * height) &&
        (i + 1 + data[i] * data[i + 1] < static_cast<int>(size))) {

size is probably unsigned, so they always have to convert it and compare it against int.

And searching for static_cast<size_t> shows: codesearch.isocpp.org

// actcd19/main/c/chromium/chromium_72.0.3626.121-
// 1/third_party/libwebm/source/common/vp9_level_stats_tests.cc:92:

for (int i = 0; i < frame_count; ++i) {
    const mkvparser::Block::Frame& frame = block->GetFrame(i);
    if (static_cast<size_t>(frame.len) > data.size()) {
        data.resize(frame.len);
        data_len = static_cast<size_t>(frame.len);
        // ...

This time frame.len has to be converted to size_t to allow safe comparisons.

Implementation Notes


Since MSVC’s STL is on GitHub, you can quickly see how the feature was developed: check this pull request, and even browse the code in STL/utility at master · Microsoft/STL.

Here’s the code for cmp_equal():

template <class _Ty1, class _Ty2>
_NODISCARD constexpr bool cmp_equal(const _Ty1 _Left, const _Ty2 _Right) noexcept {
  static_assert(_Is_standard_integer<_Ty1> && _Is_standard_integer<_Ty2>,
   "The integer comparison functions only "
   "accept standard and extended integer types.");
  if constexpr (is_signed_v<_Ty1> == is_signed_v<_Ty2>) {
    return _Left == _Right;
  } else if constexpr (is_signed_v<_Ty2>) {
    return _Left == static_cast<make_unsigned_t<_Ty2>>(_Right) && _Right >= 0;
  } else {
    return static_cast<make_unsigned_t<_Ty1>>(_Left) == _Right && _Left >= 0;
  }
}

And a similar code for cmp_less():

template <class _Ty1, class _Ty2>
_NODISCARD constexpr bool cmp_less(const _Ty1 _Left, const _Ty2 _Right) noexcept {
    static_assert(_Is_standard_integer<_Ty1> && _Is_standard_integer<_Ty2>, "same...");
    if constexpr (is_signed_v<_Ty1> == is_signed_v<_Ty2>) {
        return _Left < _Right;
    } else if constexpr (is_signed_v<_Ty2>) {
        return _Right > 0 && _Left < static_cast<make_unsigned_t<_Ty2>>(_Right);
    } else {
        return _Left < 0 || static_cast<make_unsigned_t<_Ty1>>(_Left) < _Right;
    }
}

Notes:

  • the std:: namespace is omitted here, so is_signed_v is the standard type trait std::is_signed_v, just as make_unsigned_t is std::make_unsigned_t.
  • Notice the excellent and expressive use of if constexpr; it makes metaprogramming code very easy to read.

The code fragments present cmp_equal() and cmp_less(). In both cases, the main idea is to work with the same sign. There are three cases to cover:

  • If both types have the same sign, then we can compare them directly.
  • But when the signs differ (the two remaining cases), the code uses make_unsigned_t to convert the _Right or _Left operand and additionally checks that the signed value is not smaller than 0.

Help from the compiler


When I asked the question on Twitter, I also got a helpful answer: my example used only the default GCC settings, but it’s best to turn on the handy compiler warnings and catch such conversion bugs at compile time.

Just adding -Wall generates the following warning:

<source>:8:21: warning: comparison of integer expressions of different signedness: ‘long int’ and ‘size_t’ {aka ‘long unsigned int’} [-Wsign-compare]
    8 |     std::cout << (a < c);
      |                   ~~^~~

See at Compiler Explorer

You can also compile with -Werror -Wall -Wextra, and then the compiler won’t let you run the code with signed to unsigned conversions.

Compiler Support


As of September 2022, the feature is implemented in GCC 10.0, Clang 13.0, and MSVC 16.7.

Summary


This post discussed some fundamental issues with integer promotions and comparisons. In short, if you have a binary arithmetic operation, the compiler must bring both operands to a common type. Thanks to the promotion rules, some values might be converted from signed to unsigned and thus yield surprising results. C++20 offers a new set of comparison functions, std::cmp_*, which ensure that the sign is handled correctly.

If you want to read more about integer conversions, look at this excellent blog post: The Usual Arithmetic Confusions by Shafik Yaghmour.
And also this one: Summary of C/C++ integer rules by Nayuki.

Back to you

  • What’s your approach for working with different integer types?
  • How do you avoid conversion errors?

Share your feedback in the comments below.

It hasn’t started to cool down yet, but a PING can be enjoyed at any temperature, so here goes the first one of September, in which you will find things such as…

  • Nitrux 2.4. A new maintenance release of Nitrux 2, that singular ‘KDE-ish’ distribution if ever there was one… And mind you, a maintenance release here also means new features. More information in the official announcement.
  • Armbian 22.08. Beyond Linux for the PC there is Armbian, the Debian variant for ARM devices such as the Raspberry Pi and similar boards, which arrives with its new version extending hardware support. More information in the official announcement.
  • OpenMandriva Lx 5.0 RC. Among versions in development, the most striking recent announcement is the preview of the next OpenMandriva, of which a «Silver Candidate» has just been presented. No small thing. For more details, the official note.
  • elementary OS 6.1. Speaking of distributions in progress, Danielle Foré has announced the batch of updates that elementary OS 6.1 received in August, while taking the opportunity to keep previewing bits of the upcoming elementary OS 7. You have it all on the project’s official blog.
  • Linux Mint. Something similar can be said of Linux Mint, whose monthly newsletter was published a few days ago, likewise previewing some of the new features on their way to the minty distribution. If you’re interested, you know what to do: click the link.
  • digiKam 7.8. And more maintenance releases, in this case of an application as popular in its category as digiKam, the photo manager of the KDE project. In short, a cumulative update on top of what already arrived with digiKam 7.0 and digiKam 7.5. For version 8 we will cover the news properly; in the meantime, see the official blog.
  • DuckDuckGo Email Protection. A piece of news from MC that I forgot to share around here is the launch of this rather peculiar service, which you may be interested in knowing about. No, it is not a Firefox Relay from DuckDuckGo: it goes beyond that (tell me how it went if you have tried it).
  • Resurrection guide. Who needs a guide to bring their old PC back to life? Of the regulars here, probably almost no one, but since many other kinds of people land on this site, we echo a complete guide covering hardware and software that our colleagues at MC have published: Cómo devolver a la vida un PC viejo que tengas arrinconado en casa.

The post PING: Nitrux, Armbian, OpenMandriva, digiKam, DuckDuckGo… appeared first on MuyLinux.


Hosted by Kara Swisher and Professor Scott Galloway

Every Tuesday and Friday, tech journalist Kara Swisher and NYU Professor Scott Galloway offer sharp, unfiltered insights into the biggest stories in tech, business, and politics. They make bold predictions, pick winners and losers, and bicker and banter like no one else. After all, with great power comes great scrutiny. From New York Magazine and the Vox Media Podcast Network.

“As a rule, I don’t listen to tech podcasts much at all, since I write about tech almost all day. I check out podcasts about theater or culture — about as far away from my day job as I can get. However, I follow a ‘man-about-town’ guy named George Hahn on social media, who’s a lot of fun. Last year, he mentioned he’d be a guest host of the ‘Pivot’ podcast with Kara Swisher and Scott Galloway, so I checked out Pivot. It’s about tech but it’s also about culture, politics, business, you name it. So that’s become the podcast I dip into when I want to hear a bit about tech, but in a cocktail-party/talk show kind of way.” – Christine Kent, Communications Strategist, Christine Kent Communications


