The “tr” command in Linux translates or deletes characters read from the standard input, with the results written to the standard output. We can complete several operations with the “tr” command. It gives us access to several flags including “-c”, “-d”, “-s”, and others. This command allows us to delete characters, remove digits from lines, and change lowercase letters to uppercase, among many other operations. We will use the tr command and a few of its flags as examples in this article.

Using the Linux “tr” Command

The tr command can be used to carry out tasks including getting rid of repeated characters, changing uppercase letters to lowercase letters, and replacing and removing basic characters. It is frequently combined with other commands via piping.

In this section, we utilize the Linux “tr” command to replace characters. Let’s begin putting the command into action on Linux. First, we open the terminal. Then, we use the “echo” command to accomplish this. The echo command displays the lines of text or characters that are passed to it as command-line parameters, and it is one of the most frequently used commands in Linux shell scripts. We start with the “echo” keyword, then type the statement that we want to use inside the inverted commas, “you are the best”. This is followed by the bar “|”, the “tr” keyword, the letter that we want to replace, “e”, and the letter “s”, which is the character that takes the place of every “e” in the echo sentence.

omar@omar-VirtualBox:~$ echo "you are the best" | tr e s

When we run this command, the terminal window displays the output which is the echo statement where the character “e” is replaced with the character “s”. The result is “you ars the bsst”.
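tr can also map whole sets of characters in one pass: each character in the first set is replaced by the character at the same position in the second set. A quick sketch (the input string is made up for illustration):

```shell
# 1 -> a, 2 -> b, 3 -> c, applied position by position
echo "room 123" | tr '123' 'abc'
# prints: room abc
```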

Converting the Lowercase Characters to Uppercase Characters

In this section, we’ll show you how to change the lowercase letters into uppercase letters using one of two methods: either we may provide the character range or we can specify the interpreted sequences to change the characters. The lowercase characters go in the “[:lower:]” sequence, while the uppercase characters go in the “[:upper:]” sequence. Now that the command is created, it is put into action using the “echo” statement first and then changing the lower characters to upper characters. The fruit names that are included in the echo statement are “Apple”, “Mango”, “Banana”, and “Grapes”.

As you can see, the first character in each of these elements is uppercase, while the remaining characters are lowercase. To change the remaining characters to uppercase, we use the “tr” command in which we specify the character range as “[a-z]” and “[A-Z]” where the first specifies the range of the alphabet using the lower characters, and the second specifies it using the upper characters. This essentially indicates that all lowercase characters from “a” to “z” in the echo statement are changed to uppercase.

omar@omar-VirtualBox:~$ echo "Apple" "Mango" "Banana" "Grapes" | tr '[a-z]' '[A-Z]'

Now that the command is executed, you can see that the lowercase characters are changed to uppercase characters in the following output:


Now, in the following section, we’ll utilize a different technique to change the lower case to upper case using the “tr” command with the “[:lower:]” and “[:upper:]” sequences. To accomplish this, we use the same echo statement and then type “tr” followed by the “[:lower:]” and “[:upper:]” keywords. Using “lower” first and then “upper” means that all of the lowercase letters in the echo statement are changed to uppercase.

omar@omar-VirtualBox:~$ echo "Apple" "Mango" "Banana" "Grapes" | tr '[:lower:]' '[:upper:]'

When we execute this command, it produces the same results as the previous one:


Removing Specific Characters

In this section, we’ll use the “-d” option of the “tr” command to remove a specific character from the echo statement. Using a specific character in the “tr” command with the “-d” parameter, we can delete that character from the line or the file.

Let’s remove the character using the command on the terminal. First, we use the “My name is Alex” echo statement followed by the bar “|”. After which, we write “tr” followed by the “-d” flag to delete the character. Finally, we provide the character that we want to remove from the statement which is “e” in the inverted comma.

omar@omar-VirtualBox:~$ echo "My name is Alex" | tr -d 'e'

When we run this command, the “e” character is removed from the line and the text is changed to “My name is Alx”.

Deleting Digits

Using the “tr” command with the “-d” option and the “[:digit:]” expression, we may additionally delete all the digits in a line or file. The word “digit” must be enclosed in colons and square brackets. Let’s begin using the “Alex got 98% marks” echo statement followed by the “|” bar, “tr”, the “-d” option, and the “[:digit:]” keyword. This deletes all the digits that are present in the echo statement. Since the statement contains the two digits of “98”, both of these digits are removed from the line when we run this command:

omar@omar-VirtualBox:~$ echo "Alex got 98% marks" | tr -d '[:digit:]'

Following the execution of this command, “Alex got % marks” is displayed in the output. As you can see, both digits are deleted from the line, keeping only the characters and the “%” symbol that we used in the line.
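The “-c” flag mentioned at the start of the article complements a set, and combining it with “-d” inverts the deletion: instead of removing the digits, we keep only the digits. A small sketch (the newline is kept in the set so the output still ends cleanly):

```shell
# Delete everything that is NOT a digit or a newline
echo "Alex got 98% marks" | tr -cd '[:digit:]\n'
# prints: 98
```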

Eliminating Newline Characters

In this section, we remove the file’s newline character. On the desktop, there is a file called “file.txt” that holds some information. First, we use the cat command to open the file on the terminal. To use this command, type “cat” followed by the file’s name, “file.txt.” The file opens on the terminal when we execute this command:

omar@omar-VirtualBox:~/Desktop$ cat file.txt

When the command is executed, a file which contains several names is opened. Each name is written on a separate line. Now, we display the entire name on a single line by deleting the newline character.

We type the following command on the terminal. First, we type “cat” followed by the “file.txt” file name. Then, we use the bar “|”. After that, we type the “tr” command with the “-s” option, which squeezes repeated output characters into a single occurrence. Lastly, we pass “\n” and a space inside the inverted commas. This translates the newline characters into spaces and displays all of the lines on a single line.

omar@omar-VirtualBox:~/Desktop$ cat file.txt | tr -s '\n' ' '

The output of the command is “Alex”, “Jhon”, “Watson”, and “David”. When the command is performed, it prints the file’s lines on a single line which is separated by spaces. The newline characters are deleted and changed into spaces.
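On its own, the “-s” (squeeze) option collapses runs of a repeated character into a single occurrence, which is handy for cleaning up extra whitespace. A quick sketch:

```shell
# Squeeze runs of spaces down to one space each
echo "too   many   spaces" | tr -s ' '
# prints: too many spaces
```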


This article looked at the “tr” command in Linux which can be used for a variety of tasks. The “tr” command can be used with a variety of flags such as “-s”, “-d”, and others. In the aforementioned article, we worked through numerous examples of the “tr” command in which we substituted characters, deleted characters, removed digits, and also removed the newline characters from a file, changing them to spaces so the text is displayed on a single line.



Wine 8.0 has been released after being in development for a year. This release includes over 8600 changes, the main highlights being the completion of the conversion to PE format, and work on WoW64 support which will allow running 32bit Windows applications without installing 32bit libraries.

Wine is a Windows compatibility layer that lets you run Microsoft Windows applications and games on Linux, macOS, and Android (experimental). No code emulation or virtualization occurs when running a Windows application under Wine, thus the name (Wine Is Not An Emulator).

You can use Wine as a stand-alone app to directly launch Microsoft Windows applications and games, or via a third-party tool such as Lutris on Linux. Wine is also used by Proton, Valve’s Steam Play compatibility layer that allows playing Windows games on Linux, and by CrossOver, a commercial Microsoft Windows compatibility layer for macOS and Linux, among others.

With this Wine release, the PE conversion is complete: all modules can be built in PE format. However, some modules still perform direct calls between the PE and Unix parts instead of going through the NT system interface. These last direct calls will be removed in future Wine 8.x releases.

According to the release notes, “this is an important milestone on the road to supporting various features such as copy protection, 32-bit applications on 64-bit hosts, Windows debuggers, x86 applications on ARM, etc.”

To use this right now (but remember that this feature is not complete), you’ll need to build Wine using the --enable-archs option to configure, e.g. --enable-archs=i386,x86_64.

Wine 8.0 also includes more work towards WoW64 support, which will allow running 32bit Windows applications without any 32bit Unix library, so no more installing a bunch of i386 libraries on 64bit once this is finished.

You might also like: Easily Install And Manage Custom Wine Builds (Proton-GE, Luxtorpeda, Wine-GE) For Steam And Lutris With ProtonUp-Qt GUI

Wine 8.0 default Light theme

Other important changes in Wine 8.0 include:

  • Direct2D and Direct3D improvements
  • Implemented Print Processor architecture
  • The Light theme is enabled in the default configuration
  • The graphics drivers are converted to run on the Unix side of the syscall boundary, and interface with the Unix side of the Win32u library
  • Controller hotplug support is greatly improved, and controller removal and insertion are correctly dispatched to applications
  • The Joystick Control Panel is redesigned, with new graphics and a dedicated view for XInput gamepads
  • All the built-in applications use Common Controls version 6, which enables theming and high-DPI rendering by default

Visit the Wine 8.0 release announcement for the complete list of changes added in this stable release.

As usual, most of these changes / features were already available in the Wine staging and development builds, so if you’ve used those, you’ve already been using these improvements.

To download Wine, see its download page. WineHQ provides Wine packages via its repositories for Ubuntu (and Ubuntu-based Linux distributions like Linux Mint), Debian and Fedora. There are also macOS binaries available for download.

At the time I’m writing this, the repositories have not yet been updated with the latest Wine 8.0 stable.

Mouse cursor ghosting is a more common problem on monitors with low refresh rates, such as 60 Hz monitors. This problem can also occur due to multiple issues, such as the refresh rate of the screen, mouse pointer settings, or mouse trail settings. With mouse ghosting problems, mouse lagging occurs, multiple pointers are displayed on the screen, or the mouse moves at an unexpectedly high speed.

This blog will demonstrate how to fix the mouse cursor ghosting problem.

How to Fix Mouse Cursor Ghosting Problem?

If the mouse pointer lags on the screen or moves unexpectedly fast, it means there is some issue with your mouse. To resolve the mouse cursor ghosting behavior, we have listed down some solutions:

Solution 1: Disable Mouse Pointer Trail Settings

In order to disable the mouse pointer trail setting to resolve the problem, go through the listed instructions.

Step 1: Open Control Panel App

First, launch the Windows Control Panel application from the Start menu:

Step 2: Open Hardware and Sound Settings

From the “Control Panel” Window, navigate to the “Hardware and Sound” settings:

Step 3: Navigate to the Mouse Settings

After that, under the “Devices and Printers” settings, click on the below highlighted “Mouse” option to open the Mouse properties:

Step 4: Disable Pointer Trails

Next, visit the “Pointer Options” tab, and uncheck the “Display pointer trails” checkbox to resolve the mouse cursor ghosting error. After that, hit the “OK” button:

Additionally, users can also set the pointer speed from the below highlighted “Select a pointer speed” settings:

Solution 2: Update Mouse Drivers

Another possible way to resolve the mouse cursor ghosting problem is to update the mouse drivers. For this purpose, check out the provided procedure.

Step 1: Launch Device Manager

First, launch the “Device Manager” app from the Windows “Start” menu:

Step 2: Update Mouse Drivers

Next, click on the “Mice and other pointing devices” drop-down menu. Select and right-click the mouse driver. After that, click on the update driver option from the context menu that appears to update the drivers:

Next, choose the below-highlighted option to search and update the driver automatically from online resources:

After updating the driver, click on the “Close” button to close the screen and check if the mouse cursor ghosting problem has been resolved or not:

Note: If the above-stated solution does not resolve the problem, try to clean and restart the system.


Mouse cursor ghosting is a problem that most often occurs on 60 Hz screens. With mouse ghosting problems, mouse lagging occurs, or multiple pointers are displayed on the screen. To resolve it, update the mouse drivers or disable the pointer trails from the Mouse properties. This write-up has elaborated on how to fix the mouse ghosting problem.


The commands run in a Bash shell are kept in the history file, allowing users to easily re-execute frequently used terminal commands or to troubleshoot issues that have occurred. This article explains how to clear the history of the commands you run in the terminal when using the Bash shell, which is used by default on most Linux distributions.

The shell history for Bash is kept in a file called .bash_history in the home directory. When you exit Bash (e.g. when you close a terminal window), the commands you ran in that session are appended at the end of the Bash history file.

Terminal history – Bash shell

There are cases when you may want to clear the terminal (Bash shell) history, if for example you don’t want others to see your previously ran commands, or maybe you’ve typed your password in the terminal in clear text.

If you want to completely remove all your Bash history, you can open the .bash_history file (this is a hidden file in your home directory, so press Ctrl + h to show hidden files) with a text editor, remove everything from that file, then save it. You can also remove the .bash_history file and re-create it.

If you prefer to completely remove all your Bash history from the command line, this can be done using a variety of commands, like this one:

cat /dev/null > ~/.bash_history

However, it’s important to note that while this command clears your shell history, the command used to clear the history will now be in your shell’s history. 

Also, in case there are multiple Bash instances (for example multiple terminals or terminal tabs) running, the commands run in those instances will be saved to the history file, so you may want to close them before clearing the shell history.
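One way to avoid leaving the clearing command itself in the history is to empty the history file and clear the current session's in-memory list in a single line (a sketch; `history -c` is the Bash builtin covered further below):

```shell
# Empty the history file, then clear this session's in-memory
# history so the command itself is not appended again on exit
cat /dev/null > ~/.bash_history && history -c
```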

Completely clearing your Bash shell history may not be what you want in some cases. Maybe you only want to clear the history of the current shell, or only remove some lines from the shell history. Here’s how to do that.

Clear the history of the current shell only:

history -c

Remove only some lines from the shell history. Start by typing history to display your shell’s history (this displays the history with line numbers), then delete a particular line using:

history -d LINE_NUMBER

Replace LINE_NUMBER with the line that you want to remove from the shell history.

You might also like: Bash History: How To Show A Timestamp (Date / Time) When Each Command Was Executed

You can also clear the history for a specific range of commands (range deletion requires Bash 5.0 or newer). For example, to remove the commands from line 100 to line 200, use:

history -d 100-200

Alternatively, you can open the .bash_history file with a text editor (.bash_history is a hidden file from your home directory so open a file manager in your home directory, then press Ctrl + h to show hidden files) and remove any commands you don’t want to keep in your shell’s history.

You might also be interested in: How To Change The Default Shell In Linux (Bash, Zsh, Fish, Etc.)

How did I get my first post on Google Discover you may ask? By complete accident and I am not even remotely kidding. I must say that I am no expert in SEO, although I will admit I am trying to learn all I can. I am a simple person just trying to do my part spreading wellness around and running my small, one person website. You can check my homepage here to learn what we are all about. Keep reading to learn how I believe I got my first post on Google Discover.

To be honest, I was just doing what I love and what is my creative outlet in this crazy life. I love posting and researching about wellness, motivational and inspirational topics. By the way, if you are reading this and you are not even remotely interested in this topic (aka one of my regular readers of my website), you can continue scrolling to what interests you. This is more for those readers interested in blogging either running a full time blog /website or thinking about running a full time blog / website.

I posted this post 4 easy ways to increase self care to my site and did my usual thing: submitting it to Google Search Console to be indexed. Within 3 days, I had 250 views on this single post. This was very unusual for my site and I kept getting notification after notification of viewers from all over the world clicking and reading it. I searched for the article on Google and I couldn’t find it in the top few pages. Needless to say, I was flabbergasted and couldn’t figure out how viewers were finding it. So, I just moved on doing what I love: writing more posts, and kind of forgot about it really.

Here is a snippet of the Google Search Console «Performance on Discover» screen:

I finally figured it out though, obviously hence me writing this post. I must say that I know how to use Google Search Console as I submit my new posts to it but it scares me at times because there is a lot of data and I get overwhelmed by it all. I admit, I don’t check Google Search Console as often as I should, but I checked it today and I finally found that my post 4 easy ways to increase self care was under the “Performance on Discover” tab, and so my next step was to figure out what in the heck “Discover” was.

According to this page over at Google Search Central, “Discover shows users content related to their interests, based on their Web and App Activity.” It is different from a normal search engine because “Discover takes a different approach. Instead of showing results in response to a query, Discover surfaces content primarily based on what Google’s automated systems believe to be a good match with a user’s interests.”

Needless to say I am absolutely thrilled that my first post made it to Discover. I think what helped me achieve this great moment in my blogging life is the following:

  • I wrote about something very trending and timely; in this case self care. In the wellness community and online, self care is everywhere. When I wrote the article it was mid-January and New Year’s resolutions were in full swing. I made sure to put that in the article.

  • I used a title that matches what is in the article. The title actually tells you what is in the article. I tried my best to practice good SEO with this article.

  • My images are high quality; at least I believe they are. FYI, I use Canva for my images, and if you don’t you should, because the images are high quality and there are many options available.

  • I used an authentic tone. I wrote my article as if I was talking to a friend. Very casual, that is just my writing style and most of my articles are written like that. I am not sure if that helped but it could have.

  • I was straight and to the point. Again not sure if this helped but I don’t use a lot of fluff and ramble on like some blogs I have seen trying to get to a certain word count. I just answered the question and gave some context.

  • I actually gave pointers that I use and that I think others can. My easy ways to increase self care were in my opinion easy and actionable. They weren’t some pie in the sky, crazy, expensive things. I even gave an actual example of a conversation with my grandfather. I think I was being authentic and the reader can tell.

  • Every word was composed by me; a human. I didn’t outsource my article and it wasn’t generated by AI. I am obviously not hating on you if you decide to outsource but this could have helped too.

  • I reworked my website’s home page about a week before the post was picked up; I got rid of a lot of fluff and pictures that I didn’t think were serving a purpose. I changed the layout so my latest posts are the first thing that you see on the homepage. This may have something to do with it. I also use a theme that I think is easy to read and view, again not sure if it had anything to do with being listed on discover.

Anyways, while I would love to get more of my pages on Google Discover, so more people can read my stuff and be inspired, etc. I also want to continue working towards my goals which is spreading wellness, inspiration and motivation in this crazy world. I don’t know much about Google Discover but I do hope that my existing article stays on Discover and I am able to get more on it eventually.

If you are an expert in getting your articles on Discover, anything you think I missed and would like to share?

I hope you enjoyed this little article and if you are ever in need of a little inspiration and motivation in life, feel free to subscribe below so you can stay up to date on what I post.

Thank you for reading.

Who are we?

My and Jo is our little corner of this world meant to inspire, motivate and promote wellness. We are self care and mental health advocates. We started out as a hobby / creative outlet but have quickly grown to who we are today.

Before the end of 2022, we saw that JavaScript and web technologies were the most popular among Stack Overflow users and that Rust is the language generating the most interest. In the business world, things are somewhat different: according to the report titled “The State of Tech Hiring in 2023”, produced by the IT staff of CodinGame and CoderPad, the three programming languages most demanded by companies are JavaScript, Java, and Python. Those who have followed the programming sector in recent years will probably think this result is “more of the same”.

However, even though JavaScript, Java, and Python are the most demanded languages, the report notes that the supply of professionals with those skills exceeds the demand. In other words, there are more JavaScript, Java, and Python programmers than there are open positions for them at companies.

Among the languages whose demand exceeds the available supply, TypeScript, Swift, Scala, Kotlin, and Go stand out, while at the framework level the skills most demanded by companies are Node.js, React, and .NET Core, again in a context where demand exceeds supply. Seeing the presence of TypeScript, Node.js, and React, it seems that JavaScript has become the most demanded technology in the business sector, although it may be somewhat surprising that Angular is trailing behind.

The three skills that software developers most want to acquire in 2023 are web development, artificial intelligence/machine learning, and game development. This contrasts somewhat with the three main skills that recruiters want to hire for: web development, DevOps, and database software development.

A surprising detail is that, according to the report, a third of respondents said that they feel more secure than last year, and another 41% that their situation has not changed. This clashes with the sector’s traditional volatility and with the layoffs of significant portions of their workforces that many multinationals are carrying out due to the drop in demand after the end of the pandemic. The main problems that developers face in their jobs are unplanned schedule changes, unclear direction, and a lack of technical knowledge among their team members.

Another interesting point is the developers’ level of education: 59% lack a university degree in computer science, and almost a third consider themselves, above all, self-taught. The most common working arrangement combines remote work with presence in the office, with only 15% working full time on site. Meanwhile, freelance development is becoming increasingly popular.

The report The State of Tech Hiring in 2023 was published on January 10 and is based on a survey of 14,000 professionals from different countries. In addition to the data shown, it also offers a view of how the programming sector may perform globally during 2023.

In Python, a data structure called a dictionary is used to store information as key-value pairs. Dictionary objects are optimized to extract data/values when the key or keys are known. To efficiently find values using the related index, we can convert a pandas series or dataframe with a relevant index into a dictionary object with “index: value” key-value pairs. To achieve this task, the “to_dict()” method can be used. This function is a built-in function found in the pandas module’s Series class.

A DataFrame is converted into a Python dictionary of lists, series, or other structures using the pandas to_dict() method, depending on the specified value of the orient parameter.

We will use the to_dict() method in Pandas. We can orient the returned dictionary’s key-value pairs in a variety of ways using the to_dict() function. The function’s syntax is as follows:



pandas.DataFrame_object.to_dict(orient="dict", into=dict)



    1. orient: The string value (“dict”, “list”, “records”, “index”, “series”, “split”) specifies which datatype the columns (series) are converted into. For instance, the keyword “list” would give a Python dictionary with the column names as keys and lists (converted series) as values.
    2. into: The class can be passed as an instance or an actual class. For instance, a class instance can be passed in the case of a defaultdict. The parameter’s default value is dict.

Return Type:

Dictionary converted from a dataframe or series.
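For instance, the “list” orient described above maps each column name to a plain Python list of that column’s values. A minimal sketch with a throwaway two-column frame:

```python
import pandas

df = pandas.DataFrame({"id": [23, 21], "fee": [1000, 400]})

# Each column becomes one key mapping to a list of its values
print(df.to_dict(orient="list"))
# {'id': [23, 21], 'fee': [1000, 400]}
```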


In all the examples, we will use the following DataFrame named “remarks” that holds 2 rows and 4 columns. Here, the column labels are [‘id’, ‘name’, ‘status’, ‘fee’].

import pandas

# Create the dataframe using lists
remarks = pandas.DataFrame([[23,'sravan','pass',1000],
                            [21,'sravan','fail',400]],
                           columns=['id','name','status','fee'])

# Display the DataFrame - remarks
print(remarks)




   id    name status   fee
0  23  sravan   pass  1000
1  21  sravan   fail   400


Example 1: to_dict() with No Parameters

We will convert the remarks DataFrame to a dictionary without passing any parameters to the to_dict() method.

import pandas

# Create the dataframe using lists
remarks = pandas.DataFrame([[23,'sravan','pass',1000],
                            [21,'sravan','fail',400]],
                           columns=['id','name','status','fee'])

# Convert to Dictionary
print(remarks.to_dict())



{'id': {0: 23, 1: 21}, 'name': {0: 'sravan', 1: 'sravan'}, 'status': {0: 'pass', 1: 'fail'}, 'fee': {0: 1000, 1: 400}}



The DataFrame is converted to a Dictionary.

Here, the columns in the original DataFrame were converted into keys of a dictionary, and each column stores its two values again in a dictionary format. The keys for these values start from 0.

Example 2: to_dict() with ‘series’

We will convert the remarks DataFrame to a dictionary in Series format by passing the ‘series’ parameter to the to_dict() method.


import pandas

# Create the dataframe using lists
remarks = pandas.DataFrame([[23,'sravan','pass',1000],
                            [21,'sravan','fail',400]],
                           columns=['id','name','status','fee'])

# Convert to Dictionary with series of values
print(remarks.to_dict(orient='series'))




{'id': 0    23
1    21
Name: id, dtype: int64, 'name': 0    sravan
1    sravan
Name: name, dtype: object, 'status': 0    pass
1    fail
Name: status, dtype: object, 'fee': 0    1000
1     400
Name: fee, dtype: int64}



The DataFrame is converted to a Dictionary with ‘series’ format.

Here, the columns in the original DataFrame were converted into dictionary keys, and each column stores its rows as a Series along with the column’s data type. The data type of the ‘id’ and ‘fee’ columns is int64, while the other two columns are ‘object’.

Example 3: to_dict() with ‘split’

If you want to separate the row labels, column labels, and values in the converted dictionary, then you can use the ‘split’ parameter. Here, the ‘index’ key stores a list of index labels, the ‘columns’ key holds a list of column names, and ‘data’ is a nested list that stores each row’s values in a list, separated by commas.


import pandas

# Create the dataframe using lists
remarks = pandas.DataFrame([[23,'sravan','pass',1000],
                            [21,'sravan','fail',400]],
                           columns=['id','name','status','fee'])

# Convert to Dictionary, splitting index, columns and data
print(remarks.to_dict(orient='split'))




{'index': [0, 1], 'columns': ['id', 'name', 'status', 'fee'], 'data': [[23, 'sravan', 'pass', 1000], [21, 'sravan', 'fail', 400]]}



We can see that two indices were stored in a list as a value to the key – ‘index’. Similarly, column names are also stored in a list as a value to the key – ‘columns’ and each row is stored as a list in a nested list to the ‘data’.
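A handy property of the “split” orient is that its three pieces are exactly what the DataFrame constructor needs, so the dictionary can be turned back into an equivalent DataFrame. A sketch (using a throwaway two-column frame rather than the article’s “remarks”):

```python
import pandas

df = pandas.DataFrame([[23, "pass"], [21, "fail"]], columns=["id", "status"])
d = df.to_dict(orient="split")

# Rebuild an equivalent DataFrame from the split pieces
rebuilt = pandas.DataFrame(d["data"], index=d["index"], columns=d["columns"])
print(rebuilt.equals(df))
# True
```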

Example 4: to_dict() with ‘records’

If you want to convert your DataFrame to a dictionary with each row as a dictionary in a list, you can use the ‘records’ parameter in the to_dict() method. Here, each row is placed in a dictionary such that the key is the column name and the value is the actual value in the pandas DataFrame. All rows are stored in a list.


import pandas

# Create the dataframe using lists
remarks = pandas.DataFrame([[23,'sravan','pass',1000],
                            [21,'sravan','fail',400]],
                           columns=['id','name','status','fee'])

# Convert to Dictionary by records
print(remarks.to_dict(orient='records'))




[{'id': 23, 'name': 'sravan', 'status': 'pass', 'fee': 1000}, {'id': 21, 'name': 'sravan', 'status': 'fail', 'fee': 400}]


Example 5: to_dict() with ‘index’

Here, each row is placed in a dictionary as a value to a key that starts from 0. All rows are stored again in an outer dictionary.


import pandas

# Create the dataframe using lists
remarks = pandas.DataFrame([[23,'sravan','pass',1000],
                            [21,'sravan','fail',400]],
                           columns=['id','name','status','fee'])

# Convert to Dictionary with index
print(remarks.to_dict(orient='index'))




{0: {'id': 23, 'name': 'sravan', 'status': 'pass', 'fee': 1000}, 1: {'id': 21, 'name': 'sravan', 'status': 'fail', 'fee': 400}}


Example 6: OrderedDict()

Let us utilize the ‘into’ parameter that will take OrderedDict, which converts the pandas DataFrame into an Ordered dictionary.

import pandas
from collections import OrderedDict

# Create the dataframe using lists
remarks = pandas.DataFrame([[23,'sravan','pass',1000],
                            [21,'sravan','fail',400]],
                           columns=['id','name','status','fee'])

# Convert to OrderedDict
print(remarks.to_dict(into=OrderedDict))




OrderedDict([('id', OrderedDict([(0, 23), (1, 21)])), ('name', OrderedDict([(0, 'sravan'), (1, 'sravan')])), ('status', OrderedDict([(0, 'pass'), (1, 'fail')])), ('fee', OrderedDict([(0, 1000), (1, 400)]))])



We have discussed how we can convert the dataframe or pandas objects into a python dictionary. We have seen the syntax of the to_dict() function to understand the parameters of this function and how you can modify the function’s output by specifying the function with different parameters. In the examples of this tutorial, we have used the to_dict() method, an inbuilt pandas function, to change the pandas objects to the python dictionary.



Ultimate Vocal Remover is a free and open source GUI tool to remove vocals (and more) from audio files using deep neural networks. It’s available for Windows, macOS, and Linux.

The tool, advertised as “the best vocal remover application on the internet” by its developers, uses models trained by UVR’s developers for the most part (except for Demucs v1, v2, v3, and v4 4-stem and 6-stem models).

Ultimate Vocal Remover is an AI-powered tool that is designed to remove vocals from audio tracks. This can be useful for a variety of purposes, such as creating karaoke versions of songs, isolating instrumental parts of a track, or even removing unwanted vocals from a recording. 

While its main purpose is to remove any voice from audio tracks, the software can also perform some other tasks, depending on the model you’re using. For example (using the MDX-Net process method), it can also remove the instruments from an audio file.

Ultimate Vocal Remover can work with WAV files natively, and with other formats such as MP3, FLAC, OGG, and many others thanks to FFmpeg, and can output to WAV, FLAC or MP3. This means that users can easily remove vocals from their favorite songs, regardless of the format, and without having to convert the files themselves. 

You might also like: SonoBus Is An Open Source Low Latency Peer-To-Peer Audio Streaming Application

The software is also easy to use, with a straightforward interface that allows users to quickly and easily remove vocals from any audio track.

To use Ultimate Vocal Remover GUI to remove vocals or instruments from audio files:

  • select the desired input and output at the top of the GUI
  • choose the process method, e.g., MDX-Net to get a track that has only vocals or only instruments,
  • choose the model (the Choose Model dropdown has an option to download models; I’ve used UVR-MDX-NET Main in my test, and it worked great),
  • if you choose the MDX-Net process method, check the box to get a track that has Vocals Only or Instrumental Only,
  • optionally, check the box next to GPU Conversion if you’re using a supported Nvidia graphics card (see below),
  • and finally click Start Processing

There’s also a sample mode option if you want to do a test run (which defaults to 30 seconds of the song). You may also alter various settings by clicking the wrench icon that’s shown to the left of the Start Process button.

It’s worth noting that to be able to use the GPU for processing audio files while using this AI-powered tool, you’ll need an Nvidia GTX 1060 6GB or better, with at least 8GB of VRAM recommended. AMD Radeon GPUs are not yet supported, nor are platforms other than 64-bit. The application works without an Nvidia graphics card, but processing takes much longer (using my old Asus Zenbook with an Intel i5-10210 CPU, it took about 15 minutes for a 3:40 track).

AI-related: Use ChatGPT From The Command Line With This Wrapper

Download / Install Ultimate Vocal Remover GUI

On Linux, you’ll need to install FFmpeg, Python3 PIP and TK, then install the requirements via PIP. It’s worth noting that the installed requirements take up more than 3GB of disk space, and you’ll also need some free space to download models to use with this AI-powered software.

FFmpeg-related: FFmpeg: Extract Audio From Video In Original Format Or Converting It To MP3 Or Ogg Vorbis

To run Ultimate Vocal Remover GUI on Linux, you’ll need to have some packages installed: FFmpeg (to use audio files that aren’t WAV), python3-pip and python3-tk. You can install these and run Ultimate Vocal Remover GUI by following the instructions below.

Install the dependencies:

  • Debian / Ubuntu / Linux Mint / Pop!_OS / etc.:
sudo apt install ffmpeg python3-pip python3-tk
  • Fedora (you’ll first need to enable the RPMFusion repositories to be able to install FFmpeg):
sudo dnf install ffmpeg python3-pip python3-tkinter
  • Arch Linux / Manjaro:
sudo pacman -S ffmpeg python-pip tk

Next, download the latest Ultimate Vocal Remover GUI repository zip from here (the latest release archive doesn’t include requirements.txt, though it might work if you copy the one from the repository), extract it, then open a terminal and navigate to its folder (e.g. cd ~/Downloads/ultimatevocalremovergui-master), and there run the following command to install its requirements via PIP:

python3 -m pip install --user -r requirements.txt

This will take some time as the software has some large dependencies. Once it’s done, you can run Ultimate Vocal Remover GUI by using the following command (in the folder where you’ve extracted the zip):

python3 UVR.py

The site you’re reading this on is built using Zola (unless of course you’re reading this from some future date where I’ve decided to rebuild the site using something other than Zola, in which case how’s the future?) and hosted on Vercel (again, unless you’re reading this after that’s no longer the case). One of the neat features of Vercel is first-party support for various static-site generators, including the ability to control which version is used to render your site. When I was moving this site from Netlify to Vercel, I set the ZOLA_VERSION environment variable to the latest available version, 0.16.1, and was greeted with the following build log:

[16:16:06.599] Cloning (Branch: master, Commit: fcf0f09)
[16:16:07.092] Cloning completed: 493.016ms
[16:16:07.480] Looking up build cache...
[16:16:07.773] Build Cache not found
[16:16:07.807] Running "vercel build"
[16:16:08.310] Vercel CLI 28.2.5
[16:16:08.522] Installing Zola version 0.16.1
[16:16:08.953] zola: /lib64/ version `GLIBC_2.27' not found (required by zola)
[16:16:08.953] zola: /lib64/ version `GLIBC_2.29' not found (required by zola)
[16:16:08.953] zola: /lib64/ version `GLIBCXX_3.4.26' not found (required by zola)
[16:16:08.954] zola: /lib64/ version `GLIBC_2.28' not found (required by zola)
[16:16:08.954] zola: /lib64/ version `GLIBC_2.27' not found (required by zola)
[16:16:08.954] zola: /lib64/ version `GLIBC_2.29' not found (required by zola)
[16:16:08.954] Error: Command "zola build" exited with 1

Even though Zola is written in Rust, it still relies on glibc, the GNU C Library. The update to v15 changed how the Zola binary for Linux was built, causing it to rely on newer versions of glibc. After a few emails with Vercel’s support team, I confirmed that the build environment used by Vercel only had access to glibc 2.26, hence the errors when attempting to use the latest version of Zola.

Now, at this point, I had a few options if I wanted to use the latest version of Zola to build my site, but the easiest was probably setting up my Vercel project to download a custom-built version of Zola that was built against a lower version of glibc. While it certainly would have worked, and wouldn’t have been too much effort, it also wasn’t a fun or interesting solution.

Instead, I decided to see if I could compile Zola to WASM targeting the WebAssembly System Interface (WASI) and run it as a standard npm package.

Spoiler: I could!

With most Rust projects, compiling for WASI is relatively simple. You can run cargo build --target wasm32-wasi and get a neat .wasm file that will then run using WASI runtimes like node, Wasmtime, WasmEdge, and more. That is, unless the Rust project you’re compiling uses features that aren’t available in Rust’s WASI implementation (such as networking, which has some support, but not enough for large libraries like hyper). Zola, being a static site generator, heavily relies on networking support to provide the zola serve command, which allows you to preview your static site using a local web server. If I wanted to build a WASM version of Zola that could be used to build my site, I was going to need to remove all of the networking code.

One really neat aspect of Rust is its support for conditional compilation, which allows you to exclude code from being compiled based on a number of different conditions. One of those conditions is called “features”, which are basically what it says on the tin: optional features of your application. This meant that I could mark complete sections of code as relying on the serve feature using the #[cfg(feature = "serve")] attribute. By making the serve feature a default feature, and compiling with the --no-default-features flag, I could make sure that any code that relied on networking was completely disabled.

However, networking isn’t the only feature of Rust that isn’t available in WASI. WebAssembly is single-threaded (although support for threads has been proposed), so code that relied on spawning threads was also not going to work in my WASM port. The main instance of this came from rayon, a data parallelism library that provides parallel loops which behave exactly like the sequential loops in the standard library. That parity is important, as it let me provide an alternate implementation of rayon that simply delegated to the sequential equivalents. This meant that even though the code was calling a .par_iter_mut() method, it was actually invoking the built-in .iter_mut() method.
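Zola's shim is written in Rust, but the underlying trick is language-agnostic. Here is a minimal Python sketch of the same idea, with hypothetical names (SequentialFallback, par_iter): expose the parallel API's method name, but delegate to plain sequential iteration underneath.

```python
class SequentialFallback:
    """Hypothetical stand-in for a parallel iterator on a
    single-threaded target: same method names, sequential behavior."""

    def __init__(self, data):
        self.data = data

    def par_iter(self):
        # Callers think they get a parallel iterator; on a
        # single-threaded target we just hand back the sequential one.
        return iter(self.data)

# Caller code is unchanged: it still "parallel-iterates".
squares = [x * x for x in SequentialFallback([1, 2, 3]).par_iter()]
```

Because the parallel and sequential APIs produce identical results, callers never need to know which implementation they got.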

Most features that don’t work on WASI will trigger compile-time errors, making it simple to make the necessary changes to get things compiling. However, there are unfortunately some issues that are only exposed at runtime. One such issue came from a fairly innocuous-looking piece of code:

pub fn is_path_in_directory(parent: &Path, path: &Path) -> Result<bool> {
    // Canonicalize both paths before comparing prefixes
    let canonical_path = path
        .canonicalize()
        .with_context(|| format!("Failed to canonicalize {}", path.display()))?;
    let canonical_parent = parent
        .canonicalize()
        .with_context(|| format!("Failed to canonicalize {}", parent.display()))?;

    Ok(canonical_path.starts_with(canonical_parent))
}

This piece of code checks to see if the provided path is contained by the provided parent. This is achieved by canonicalizing each path and comparing the prefixes. While the .canonicalize() method is provided when compiling to WASI, it will always error out since WASI doesn’t really have the concept of paths (at least not in the same way as most other operating systems think of them). Thankfully, the solution was to simply leave the path as is when running on WASI.
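Purely as an illustration of the canonicalize-then-compare-prefixes logic (Zola's real implementation is the Rust function shown earlier), the same check can be sketched in Python with pathlib, assuming Python 3.9+ for Path.is_relative_to:

```python
from pathlib import Path

def is_path_in_directory(parent: str, path: str) -> bool:
    # Canonicalize both paths (resolving symlinks and ".."),
    # then check whether the parent is a prefix of the path.
    canonical_path = Path(path).resolve()
    canonical_parent = Path(parent).resolve()
    return canonical_path.is_relative_to(canonical_parent)
```

Canonicalizing first is what makes the prefix comparison safe: a path like "/tmp/../etc/passwd" resolves outside "/tmp", so a naive string-prefix check would be fooled while this one is not.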

So far, most of the issues I ran into were relatively easy to fix; it simply took a moment to figure out what was causing the issue, followed by a small tweak so the code behaved a bit differently when running on WASI.

Unfortunately, there was a big roadblock ahead.

Zola used libsass.

For those of you who are unaware, I envy you. LibSass, like Nokogiri, is one of those dependencies that elicits long sighs from developers, primarily due to the fact that it’s a C/C++ library that honestly has no business being integrated into non-C/C++ projects. It’s even caused headaches for the Zola maintainers, outside the scope of my WASM port. I did make a solid effort to get it working; I played around with virtually every aspect of WASI SDK in an attempt to get it to compile. The main issue I was running into was that the Rust crate sass-rs needed to link to the C++ standard library (sound familiar?). On Linux, this is usually provided by libstdc++. However, in WASI SDK, this is provided by libc++. I had to manually patch the file to always return cargo:rustc-link-lib=dylib=c++ in an attempt to ensure it was linked correctly. Even though I was able to get things linking correctly on the Rust side, it would fail when compiling to WASM. As I’ve mentioned on this blog before, I have limited patience when it comes to C code, so I eventually gave up and switched Zola’s SASS implementation to grass, a Sass compiler written purely in Rust. It worked wonderfully, and only required the smallest of changes.

Once I had a WASM version of Zola, I needed to wrap the module in a bit of setup code so that the Node runtime could execute it. This was (thankfully!) trivially easy. Here’s the complete implementation:

"use strict";
const { readFile } = require("node:fs/promises");
const { WASI } = require("wasi");
const { env } = require("node:process");
const { join } = require("node:path");

 * @param {string} siteDir Path to Zola site, relative to the current working directory
 * @param {string} [baseUrl]
module.exports = async function build(siteDir = ".", baseUrl) {
  let args = ["zola", "--root", "/", "build"];
  if (baseUrl) {
    args = [...args, "--base-url", baseUrl];
  const wasi = new WASI({
    preopens: {
      "/": join(process.cwd(), siteDir),
  const importObject = { wasi_snapshot_preview1: wasi.wasiImport };
  const wasm = await WebAssembly.compile(
    await readFile(join(__dirname, "zola.wasm"))
  const instance = await WebAssembly.instantiate(wasm, importObject);


Once this was published to npm, all I needed to do to run this on Vercel was to point a build script in my package.json to the following file:

import build from "@dstaley/zola-wasm";

const baseUrl =
  process.env.VERCEL_ENV === "production"
    ? ""
    : `https://${process.env.VERCEL_URL}`;

await build(".", baseUrl);

I dropped that into the repo for this site, created a pull request, and was greeted with the following output in the build log on Vercel:

Cloning (Branch: master, Commit: 78e53bf)
Cloning completed: 569.844ms
Restored build cache
Running "vercel build"
Vercel CLI 28.10.0
Installing dependencies...

up to date in 172ms

> build
> node --experimental-wasi-unstable-preview1 build.mjs

(node:328) ExperimentalWarning: WASI is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
Building site...
Checking all internal links with anchors.
> Successfully checked 0 internal link(s) with anchors.
-> Creating 6 pages (0 orphan) and 1 sections
Done in 6.2s.

Build Completed in /vercel/output [14s]
Generated build outputs:
 - Static files: 40
 - Serverless Functions: 0
 - Edge Functions: 0
Deployed outputs in 1s
Build completed. Populating build cache...
Uploading build cache [4.35 MB]...
Build cache uploaded: 887.541ms
Done with "."

And with that I was able to build my site using the latest version of Zola on Vercel, despite the fact that Vercel didn’t have the latest version of glibc. The code for this is, of course, available on GitHub and on npm. Was this a good idea? No, probably not.

But it was a hell of a lot of fun.

Taiwanese manufacturer ASRock, best known for its motherboards, has announced a partnership with Canonical to market Ubuntu-certified devices aimed at the Artificial Intelligence of Things at the edge (which we will abbreviate from here on as Edge AIoT). The manufacturer intends to sell products that benefit from the functionality and long-term support offered by Ubuntu, starting with the iEP-5000G Industrial IoT Controller.

ASRock points to the rapid growth of Edge AIoT and its adoption by the open source community. Added to this is Ubuntu’s strong presence in the segment as one of the best-known operating systems in the world and one of the most widely used in Edge AIoT development. Canonical, after all, has been laboring for many years in the IoT and edge computing sectors, which are now beginning to intersect with artificial intelligence.

Regarding the iEP-5000G Industrial IoT Controller, an Ubuntu-certified platform, ASRock explains that it “features high computing power with flexible I/O and expansion options in a compact and rugged design, serving as an Edge Controller and IoT Gateway in various Edge AI applications, including smart manufacturing, process automation, and smart poles in smart cities. Ubuntu-tested and validated hardware enables long-term support with up to 10 years of security updates from Canonical, delivering the best and most reliable customer experience.”

Canonical and ASRock partner to market Ubuntu-certified products aimed at the Artificial Intelligence of Things at the edge (Edge AIoT)

Another point highlighted by ASRock is that choosing Ubuntu for an artificial intelligence or machine learning project “can offer several benefits, such as being open source, fast AI model training, significant community support, receiving the latest updates with solid security, and more”.

In short, nothing really new from Canonical and its IoT strategy, whose main pillar is Ubuntu Core, an edition that doesn’t usually receive as much media attention as the regular desktop and server editions. The company behind the most popular distribution doesn’t want to lose pace under any circumstances, so it has boldly embraced AIoT, a relatively young segment born from the merging of the Internet of Things and Artificial Intelligence.