Tag listing: Linux


The article originally appeared on the Linux Foundation’s Training and Certification blog. The author is Marco Fioretti. If you are interested in learning more about microservices, consider some of our free training courses, including Introduction to Cloud Infrastructure Technologies, Building Microservice Platforms with TARS, and WebAssembly Actors: From Cloud to Edge.

Microservices allow software developers to design highly scalable, highly fault-tolerant internet-based applications. But how do the microservices of a platform actually communicate? How do they coordinate their activities or know who to work with in the first place? Here we present the main answers to these questions, and their most important features and drawbacks. Before digging into this topic, you may want to first read the earlier pieces in this series, Microservices: Definition and Main Applications, APIs in Microservices, and Introduction to Microservices Security.

Tight coupling, orchestration and choreography

When every microservice can and must talk directly with all its partner microservices, without intermediaries, we have what is called tight coupling. The result can be very efficient, but makes all microservices more complex, and harder to change or scale. Besides, if one of the microservices breaks, everything breaks.

The first way to overcome these drawbacks of tight coupling is to have one central controller of all, or at least some, of the microservices of a platform, which makes them work synchronously, just like the conductor of an orchestra. In this orchestration – also called the request/response pattern – it is the conductor that issues requests, receives their answers, and then decides what to do next: whether to send further requests to other microservices, or to pass the results of that work to external users or client applications.

The approach complementary to orchestration is the decentralized architecture called choreography. It consists of multiple microservices that work independently, each with its own responsibilities, but like dancers in the same ballet. In choreography, coordination happens without central supervision, via messages flowing among several microservices according to common, predefined rules.

That exchange of messages, as well as the discovery of which microservices are available and how to talk with them, happens via event buses. These are software components with well-defined APIs to subscribe to and unsubscribe from events, and to publish events. Event buses can be implemented in several ways, exchanging messages using standards such as XML, SOAP or the Web Services Description Language (WSDL).

When a microservice emits a message on a bus, all the microservices that subscribed to the corresponding event bus see it, and know if and how to answer it asynchronously, each on its own, in no particular order. In this event-driven architecture, all a developer must code into a microservice to make it interact with the rest of the platform is the subscription commands for the event buses on which it should generate events, or wait for them.
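As a toy illustration of this publish/subscribe pattern, here is a minimal, hypothetical event bus sketched in Bash: subscriber functions register for a topic, and publishing an event invokes each of them asynchronously, in no particular order. All the names (topics, handlers) are invented for illustration; a real platform would use a message broker rather than shell functions.

```shell
#!/usr/bin/env bash
# Toy event bus: topic -> space-separated list of handler names.
declare -A SUBSCRIBERS

subscribe() {            # subscribe <topic> <handler-function>
  SUBSCRIBERS[$1]="${SUBSCRIBERS[$1]-} $2"
}

publish() {              # publish <topic> <message>
  local topic=$1 msg=$2 h
  for h in ${SUBSCRIBERS[$topic]-}; do
    "$h" "$msg" &        # each subscriber reacts independently, asynchronously
  done
  wait                   # let all handlers finish before returning
}

# Two hypothetical downstream microservices, faked as functions:
billing()  { echo "billing saw: $1"; }
shipping() { echo "shipping saw: $1"; }

subscribe order.created billing
subscribe order.created shipping
publish order.created "order #42" | sort   # sort only to make output deterministic
```

Note that the publisher never knows, or cares, which handlers reacted: that is exactly the loose coupling described above.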

Orchestration or Choreography? It depends

The two most popular coordination choices for microservices are choreography and orchestration. Their fundamental difference is in where they place control: one distributes it among peer microservices that communicate asynchronously; the other concentrates it in one central conductor that keeps everybody else in line.

Which is better depends upon the characteristics, needs and patterns of real-world use of each platform, with maybe just two rules that apply in all cases. The first is that blatant tight coupling should almost always be avoided, because it goes against the very idea of microservices. Loose coupling with asynchronous communication is a far better match with the fundamental advantages of microservices, that is, independent deployment and maximum scalability. The real world, however, is a bit more complex, so let’s spend a few more words on the pros and cons of each approach.

As far as orchestration is concerned, its main disadvantage may be that centralized control often is, if not a synonym of, at least a shortcut to, a single point of failure. A more frequent disadvantage of orchestration is that, since the microservices and the conductor may be on different servers or clouds, connected only through the public Internet, performance may suffer, more or less unpredictably, unless connectivity is really excellent. At another level, with orchestration virtually any addition of microservices, or change to their workflows, may require changes to many parts of the platform, not just the conductor. The same applies to failures: when an orchestrated microservice fails, there will generally be cascading effects, such as other microservices waiting to receive orders only because the conductor is temporarily stuck waiting for answers from the failed one. On the plus side, exactly because the “chain of command” and communication are well defined and not very flexible, it will be relatively easy to find out what broke and where. For the very same reason, orchestration facilitates independent testing of distinct functions. Consequently, orchestration may be the way to go whenever the communication flows inside a microservice-based platform are well defined and relatively stable.
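To make the request/response flow concrete, here is a deliberately tiny Bash sketch of a conductor, with the orchestrated microservices faked as local shell functions. The names (inventory, payment) and responses are invented for illustration; in a real platform these would be remote API calls.

```shell
#!/usr/bin/env bash
# Hypothetical "microservices", faked as shell functions:
inventory() { echo "reserved:$1"; }
payment()   { echo "charged:$1"; }

conductor() {                      # the central conductor
  local order=$1 r
  r=$(inventory "$order")          # request/response step 1
  [ "$r" = "reserved:$order" ] || { echo "order $order failed"; return 1; }
  r=$(payment "$order")            # step 2 runs only after step 1 succeeds
  [ "$r" = "charged:$order" ]  || { echo "order $order failed"; return 1; }
  echo "order $order complete"
}

conductor 42    # prints: order 42 complete
```

Note how the conductor is the single place that knows the order of the steps, which is what makes the flow easy to trace, and also what makes the conductor a potential single point of failure.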

In many other cases, choreography may provide the best balance between independence of individual microservices, overall efficiency, and simplicity of development.

With choreography, a service must only emit events, that is, notifications that something happened (e.g., a log-in request was received), and all its downstream microservices must only react to them, autonomously. Therefore, changing a microservice will have no impact on the ones upstream. Even adding or removing microservices is simpler than it would be with orchestration. The flip side of this coin is that, at least if one goes for it without taking precautions, it creates more chances for things to go wrong, in more places, and in ways that are harder to predict, test or debug. Throwing messages onto the Internet, counting on everything to be fine but with no way to know whether all their recipients got them and were able to react in the right way, can make life very hard for system integrators.

Conclusion

Certain workflows are by their own nature highly synchronous and predictable. Others aren’t. This means that many real-world microservice platforms could, and probably should, mix both approaches to obtain the best combination of performance and resistance to faults or peak loads. This is because temporary peak loads – which may be best handled with choreography – may happen only in certain parts of a platform, while the faults with the most serious consequences, for which tighter orchestration could be safer, may happen only in others (e.g. purchases of single products by end customers, vs orders to buy the same products in bulk to restock the warehouse). For system architects, maybe the worst thing that could happen is to design an architecture that is either orchestration or choreography without being really conscious of which one it is (maybe because they are just porting a pre-existing, monolithic platform to microservices), thus getting nasty surprises when something goes wrong, or when new requirements turn out to be much harder than expected to design or test. Which leads to the second of the two general rules mentioned above: don’t even start to choose between orchestration and choreography for your microservices before having the best possible estimate of what their real-world loads and communication needs will be.



Source link


“Almost everything productive we can do in Linux requires us to have a network connection. Whether developing apps, installing software, scripting, sharing files, or even watching movies, we need a working network connection. Hence, “I require a network connection” is simply an understatement. The only way to enable network connection on a machine is through a network interface.

A network interface is a device or a point of connection between a device and a private or public network. In most cases, a network interface is a physical card such as a wireless adapter, a network card, etc. However, this does not necessarily mean that a network interface should be a physical device. For example, a loopback adapter that is not physically visible is implemented by software and available on all devices.”

This quick tutorial will show you how to set the default interface in Linux.

Method 1 – Turn Off Adapters

The simplest way to set your default network interface is by disabling all other interfaces. In Linux, you can do this using either the GUI network manager or the terminal.

Suppose you have a wireless adapter and you wish to use the Ethernet adapter; in that case, you can bring down the wifi adapter using the command:

$ sudo ifconfig wlan0 down
$ sudo ifconfig eth0 up

The above commands will shut down the wireless adapter and bring up the Ethernet adapter.

That will force the system to switch to the available network.

NOTE: The above command requires sudo or root privileges with the net-tools package installed.
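On many modern distributions, net-tools (and thus ifconfig) is no longer installed by default. A rough equivalent using iproute2 is sketched below as a hypothetical dry-run helper, so the commands can be inspected before running them for real with root privileges.

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the ifconfig commands above with iproute2.
# With DRY_RUN=1 it only prints the commands; without it, they are run
# via sudo (which requires root privileges and real interfaces).
switch_iface() {
  local down_if=$1 up_if=$2 cmd
  for cmd in "ip link set $down_if down" "ip link set $up_if up"; do
    if [ "${DRY_RUN-}" = 1 ]; then
      echo "$cmd"        # dry run: show what would be executed
    else
      sudo $cmd          # intentionally unquoted: word-split into arguments
    fi
  done
}

DRY_RUN=1 switch_iface wlan0 eth0
```

Running `switch_iface wlan0 eth0` without DRY_RUN would execute the same two commands via sudo.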

Method 2 – Set the Default Interface Using the ip Command

Start by using the command:

$ ip route

The command above should list the default gateways available in the system, including the default interface.

An example output is as shown:

default via 192.168.0.1 dev wlan0 proto dhcp metric 100
169.254.0.0/16 dev wlan0 scope link metric 1000
192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.10 metric 100

As we can see from the output above, the default interface is set to wlan0. However, we can change this by following a few steps.
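If you need the current default interface in a script, it can be extracted from the `ip route` output. The following sketch feeds the sample output shown above into a small awk filter, so it runs without a live network; on a real system you would pipe `ip route` into it instead.

```shell
#!/usr/bin/env bash
# Print the interface named after "dev" on the "default" route line.
default_iface() {
  awk '/^default / { for (i = 1; i < NF; i++) if ($i == "dev") print $(i+1) }'
}

# Sample `ip route` output from above, so the sketch is self-contained:
sample='default via 192.168.0.1 dev wlan0 proto dhcp metric 100
169.254.0.0/16 dev wlan0 scope link metric 1000
192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.10 metric 100'

echo "$sample" | default_iface   # prints: wlan0
```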

Start by removing all the default interfaces with the command:

$ sudo ip route flush 0/0

The command should remove all the default gateways. You can verify by running the ip route list command:

$ sudo ip route list

An example output:

169.254.0.0/16 dev wlan0 scope link metric 1000
192.168.0.0/24 dev wlan0 proto kernel scope link src 192.168.0.10 metric 100

We can now proceed to add a default interface using the ip route command.

$ sudo ip route add default via 192.168.0.2 dev eth0

NOTE: Ensure you replace the gateway IP address and the interface name with your desired ones.

Once executed successfully, the command should set the interface eth0 as the default.

We can verify this by running the ip route command:

$ sudo ip route list
default via 192.168.0.2 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1000
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.10 metric 100

The output shows that the default interface is set to eth0 with our specified IP address.

Conclusion

That’s it for this one. In this article, we discussed two primary methods of changing your default interface in Linux.

Thanks for reading!!



Source link


At last week’s Open Source Summit North America, Robin Ginn, Executive Director of the OpenJS Foundation, relayed a principle her mentor taught: “1+1=3”. No, this isn’t ‘new math’; it demonstrates the principle that, working together, we are more impactful than working apart. Or, as my wife and I say all the time, teamwork makes the dream work.

This principle is really at the core of open source technology. Turns out it is also how I look at the Open Programmable Infrastructure project. 

Stepping back a bit, as “the new guy” around here, I am still constantly running across projects where I want to dig in more and understand what it does, how it does it, and why it is important. I had that very thought last week as we launched another new project, the Open Programmable Infrastructure Project. As I was reading up on it, they talked a lot about data processing units (DPUs) and infrastructure processing units (IPUs), and I thought, I need to know what these are and why they matter. In the timeless words of The Bobs, “What exactly is it you do here?” 

What are DPUs/IPUs? 

First – and this is important – they are basically the same thing; they just have different names. Here is my oversimplified explanation of what they do.

In most personal computers, you have a separate graphics processing unit (GPU) that helps the central processing unit (CPU) handle the tasks related to processing and displaying graphics. The GPU offloads that work from the CPU, allowing it to spend more time on the tasks it does best. So, working together, they can achieve more than each can separately.

Servers powering the cloud also have CPUs, but they have other tasks that can consume tremendous computing power, say data encryption or network packet management. Offloading these tasks to separate processors enhances the performance of the whole system, as each processor focuses on what it does best.

In other words, 1+1=3.

DPUs/IPUs are highly customizable

While separate processing units have been around for some time, like your PC’s GPU, their functionality was primarily dedicated to a particular task. DPUs/IPUs, in contrast, combine multiple offload capabilities that are highly customizable through software. That means a hardware manufacturer can ship these units out, and each organization can use software to configure them according to its specific needs. And they can do this on the fly.

Core to the cloud and its continued advancement and growth is the ability to quickly and easily create and dispose of the “hardware” you need. It wasn’t too long ago that if you wanted a server, you spent thousands of dollars on one, built all kinds of infrastructure around it, and hoped it was what you needed at the time. Now, pretty much anyone can quickly set up a virtual server in a matter of minutes for virtually no initial cost.

DPUs/IPUs bring this same type of flexibility to your own datacenter because they can be configured to be “specialized” with software rather than having to literally design and build a different server every time you need a different capability. 

What is Open Programmable Infrastructure (OPI)?

OPI is focused on utilizing open software and standards, as well as frameworks and toolkits, to allow for the rapid adoption and use of DPUs/IPUs. The OPI Project is both hardware and software companies coming together to establish and nurture an ecosystem to support these solutions. It “seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.”

In other words, competitors are coming together to agree on a common, open ecosystem they can build together and innovate on top of, separately. They are living out 1+1=3.

I, for one, can’t wait to see the innovation.



Source link


Gaming on Linux has become very popular and has gained the trust of hardcore gamers in a very short period of time. Thanks to digital video game distribution services like Steam and PlayOnLinux, it has been possible for gamers like me to enjoy my favourite video games from Windows on Linux and its distributions.

Now that we’re in mid-2022, there are many games from popular developers and publishers available for Linux and its distributions, like Ubuntu. But games from popular publishers guarantee one thing: a price tag, and some games are very expensive too. So, today I’m going to introduce you to 10 free games for Linux.

1. Dota 2

I know many of you won’t be surprised to see Dota 2 at the top of the list. No game can replace it at the top, and you will hardly read any gaming-related blog without a mention of this very popular game. Dota 2 is one of the most popular battle arena games available for Linux. It is also free.

Dota 2 is published by Valve on Steam, and it is one of the most played games on the platform. If you want to compete with top gamers around the world, this game offers you the opportunity, as it is part of esports competitions.

Valve keeps on updating the game from time to time with new features and content to keep the game fresh and always exciting.

Download Here

2. Super Tux Kart

Super Tux Kart is a free-to-play kart racing game for Linux and its distributions. It is an open-source arcade racing game with a variety of gameplay modes, characters and tracks. The gameplay is quite realistic and a fun experience.

The game also has a story mode where, as a player, you must face Nolok and defeat him to keep the kingdom safe. There is also a time trial mode where you can challenge yourself to beat your own fastest time.

There are three main characters in the game: Tux, a brave penguin and the hero of SuperTuxKart; Gnu, the mentor of Tux; and Nolok, the villain.

Download Here

3. Alien Arena: Warriors of Mars

Alien Arena is a first-person shooter game available on Steam for Linux. Developed by COR Entertainment, it is powered by the Quake II engine and the Open Dynamics Engine. The game can be played in single-player as well as multiplayer modes.

It is a free-to-play game that is also very popular on other operating system platforms such as Microsoft Windows and macOS. The gameplay is full of action, making it thrilling till the very end. The game features a variety of battle environments along with incredible graphics.

Download Here  

4. Unknown Horizons

Do you love city-building games, right down to their economies? Then I have a special real-time economy simulator and city builder for you. Unknown Horizons is a 2D simulation game with features like urban development, commodities management, diplomacy, trade, strategy and exploration.

It is an exciting game where you can build a high-profile model city from scratch and find new islands, trade routes and diplomatic strategies to raise your city’s wealth.

Download Here

5. Team Fortress 2

This one is for FPS game lovers. Team Fortress 2 is a very popular first-person shooter video game developed and published by Valve. Even though it is a free video game, the Team Fortress 2 developers keep pushing updates with new gameplay features from time to time.

Even if you’re new to this game, it ensures you settle in well with detailed training and offline practice modes. Gameplay modes include Capture the Flag, Control Point, Payload, Arena, King of the Hill, and many more.

Download Here

6. Counter-Strike: Global Offensive

Counter-Strike: Global Offensive, popularly known as CS:GO, is another excellent game published by Valve Corporation. It is a first-person shooter video game available for free on various platforms, including Steam for Linux.

The game was initially released in 2012. Since then, Valve Corporation has been pushing major updates every year. The game is also very popular among esports gamers and is played in various competitions every year.

Download Here   

7. Mari0

Every gamer has played the popular Mario game in their childhood. The original Super Mario Bros. was developed and published by Nintendo, and Mari0 is a replica of it with some tweaks, combining it with the portal gun from Valve’s Portal.

Mari0 has features such as a portal gun to shoot portals, 4-player simultaneous co-op with each player having their own portal gun, 33 different hats, game modifiers, a level editor, and many more.

Download Here

8. Zero Ballistics

Zero Ballistics is a free first-person shooter tank combat game available for Windows and Linux. The game is easy to learn but difficult to master, as the gameplay gets more intense with every level you cross.

The game features 81 different tank setups, which keeps you ready for any combat situation. The gameplay is quite engaging, as mastering the main weapon’s ballistic trajectory and judging the distance correctly is no easy feat.

Download Here   

9. World of Warships: Transformers

World of Warships: Transformers is another free-to-play game available on the Steam platform for Linux. You take control of one of 300 historic vessels from WWI and WWII, like the Iowa, Bismarck and Yamato, to combat your enemies.

You can play this game in solo mode or with your friends in multiplayer mode. Combat is engaging and demands your full concentration, or you can get bombarded by the enemy at any time. You can customize your fleet with options such as ship modules, camouflages, flags and many more.

Download Here

10. Cube 2: Sauerbraten

Cube 2: Sauerbraten is a free multiplayer first-person shooter, the successor of the Cube first-person shooter game. It is an old-school deathmatch game with exciting and engaging gameplay.

It is a cross-platform game, hence also available for Windows, macOS, FreeBSD and OpenBSD alongside Linux and its distros. Action game lovers are going to like this game, with features like map and geometry editing tools.

Download Here

So, here are the 10 best games which you can play on Linux and its various distributions for free. There are many other free games available for Linux, with ever-increasing competition among Linux game developers. But only a few manage to stand out and rule the hearts of gamers like us!



Source link


Linux is an operating system that comes in different distributions, like Ubuntu, Debian, and Arch Linux. Just like macOS and Windows, Linux is a popular operating system that is installed on computers and laptops to manage the hardware of the machine and perform the different tasks requested by users.

In this guide, different ways of installing or putting the Linux operating system on a laptop have been discussed.

How to put Linux on a laptop

There are two methods to install Linux on a laptop:

  • Using a USB
  • Using a Virtual Machine

How to download a Linux operating system ISO file on a laptop

For both of the above methods, we have to download the ISO file from the official website of your chosen Linux distribution. For example, for a better understanding, we will download the ISO file of Ubuntu by visiting its official website:

How to put Linux on a laptop using a USB

For this method, we have to make the USB bootable by flashing the Linux ISO file onto it and then attach the USB to the laptop. Reboot the computer, open the boot menu, and from there we will install the operating system.

The detailed explanation is described in the steps below:

Step 1: Flashing Linux onto a USB
Attach the USB to the computer and open any flasher. We will launch balenaEtcher, choose the ISO file and then the USB device we want to flash it onto, and finally flash the image:
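If you prefer the terminal over a GUI flasher, the same flashing step can be done with dd. The sketch below is a hypothetical helper: the ISO filename and the /dev/sdX device are placeholders (verify the real device with lsblk first, since dd overwrites the target completely), and DRY_RUN=1 makes it print the command instead of executing it.

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around dd for flashing an ISO onto a USB device.
# WARNING: without DRY_RUN=1, this destroys everything on the target device.
flash_iso() {
  local iso=$1 dev=$2
  local cmd="dd if=$iso of=$dev bs=4M status=progress conv=fsync"
  if [ "${DRY_RUN-}" = 1 ]; then
    echo "$cmd"      # dry run: show the command that would run
  else
    sudo $cmd        # intentionally unquoted: word-split into arguments
  fi
}

# Placeholder filename and device, for illustration only:
DRY_RUN=1 flash_iso ubuntu-22.04-desktop-amd64.iso /dev/sdX
```

conv=fsync makes dd flush the data to the device before exiting, so it is safe to unplug the USB once the command returns.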

Step 2: Reboot the computer and open the boot menu
When the image of Linux has been flashed onto the USB, open the boot menu. It is important to mention here that the boot menu key is unique to every machine, so search Google for your machine’s boot menu key. Press it and then select to boot from the USB:

The Linux operating system has been booted:

Then it will ask you either to install Linux on your laptop or simply try it:

Now, if you want to install it, click on “Install Ubuntu” and proceed by following the simple steps. Or you can try it by running it from the USB after clicking on “Try Ubuntu”.

How to put Linux on a laptop using a virtual machine

The other method is to install a virtual machine application and create a new machine for Ubuntu. To do this, launch the virtual machine application and click on “New”:

Name your machine; we are naming it “Ubuntu”:

Assign RAM memory to the newly created machine and click on the “Next” button:

Choose the hard disk type:

Choose the physical storage type:

And finally, create the machine by assigning some hard disk space to it:

Then run the machine by clicking on the “Start” menu:

For the next steps follow the instructions displayed on the screen or read this article.

Conclusion

An operating system is used to manage the hardware of a laptop and perform different tasks for users using that hardware. There are different operating systems, among which Linux is popular across the world, and in this guide, methods of putting Linux on a laptop have been explained.



Source link


iPXE is a modern PXE firmware that works on both BIOS and UEFI motherboards. It can download the required boot files using many protocols, such as TFTP, FTP, HTTP, HTTPS, and NFS. Also, iPXE can boot from iSCSI SAN (Storage Area Network), Fibre Channel SAN via FCoE, and AoE SAN. iPXE can boot operating system installer images and full operating systems without requiring any HDD/SSD installed on the host (iSCSI SAN boot). Diskless booting with iPXE is very easy to configure. In addition, iPXE supports scripting. You can control the boot process with iPXE scripts stored on a remote server. Thus, the iPXE script is a very powerful tool for dynamic boot management with iPXE.

For more information on iPXE, visit the official website of iPXE.

This article will show you how to compile iPXE and configure your Synology NAS as a PXE Boot server for booting Linux installation images over the network via iPXE. As iPXE supports BIOS and UEFI motherboards, I will show you how to configure the iPXE Boot server on your Synology NAS for PXE booting on BIOS and UEFI motherboards.

Plus, I will demonstrate how to configure the iPXE Boot server for booting the installation images of the following Linux distributions: Ubuntu Desktop 20.04 LTS, Ubuntu Server 20.04 LTS, Ubuntu Desktop 22.04 LTS, Ubuntu Server 22.04 LTS, and Fedora 36 Workstation.

Now, let’s get started.

  1. Creating a pxeboot Shared Folder
  2. Enabling Access to the NAS Files via HTTP/HTTPS
  3. Enabling NFS for the web Shared Folder
  4. Enabling the TFTP Service
  5. Installing DHCP Server
  6. Enabling DHCP for a Network Interface
  7. Booting Ubuntu Installer in Live Mode
  8. Installing Required Dependencies for Building iPXE on Ubuntu Desktop Live
  9. Cloning iPXE Git Repository
  10. Enabling iPXE NFS, HTTPS, and FTP Protocol Support
  11. Creating an iPXE Embedded Boot Configuration File
  12. Compiling iPXE for BIOS-Based Motherboards
  13. Compiling iPXE for UEFI-Based Motherboards
  14. Uploading the Required Files to the NAS
  15. Creating Default iPXE Boot Configuration File
  16. Enabling PXE on Synology NAS
  17. Basics of iPXE Boot Configuration File
  18. PXE Booting Ubuntu Desktop 20.04 LTS Live With iPXE
  19. PXE Booting Ubuntu Server 20.04 LTS With iPXE
  20. PXE Booting Ubuntu Desktop 22.04 LTS Live With iPXE
  21. PXE Booting Ubuntu Server 22.04 LTS With iPXE
  22. PXE Booting Fedora 36 Workstation Live With iPXE
  23. Conclusion
  24. References

Creating a pxeboot Shared Folder

To keep all the iPXE Boot files organized, you should create a new shared folder, pxeboot, as shown in the screenshot below.

If you need any assistance in creating a new shared folder, read How to Setup Synology NAS?

Enabling Access to the NAS Files via HTTP/HTTPS

iPXE can download the required boot files and iPXE configuration files (a.k.a iPXE scripts) from a web server using the HTTP/HTTPS protocol.

NOTE: iPXE HTTPS support is not enabled by default. You will have to enable it manually before compiling iPXE. Check Enabling iPXE NFS, HTTPS, and FTP Protocol Support for more information.

To set up a web server on your Synology NAS, you will have to install the Web Station package from the Package Center app. Once you install the Web Station package, you will be able to access the iPXE configuration files (iPXE scripts) and required operating system kernels (and boot files) from your Synology NAS via HTTP/HTTPS.

To install Web Station on your Synology NAS, open the Package Center app, search for Web Station, and click on the Web Station package.

Click on Install.

The Web Station package should be installed.

Once Web Station is installed, a new shared folder named web should be automatically created, as shown in the screenshot below. You can access any files stored in this shared folder via HTTP/HTTPS.

Enabling NFS for the web Shared Folder

You will also need to enable the NFS file service and configure the web shared folder for NFS access, as PXE booting requires it to work for some Linux distributions (e.g., Ubuntu).

To enable the NFS file service, navigate to Control Panel > File Services.

From the NFS tab, check the Enable NFS service checkbox, as marked in the following screenshot:

Click on Apply for the changes to take effect.

The NFS file service should be enabled.

Now, navigate to Control Panel > Shared Folder, select the web shared folder, and click on Edit as marked in the following screenshot:

Click on Create from the NFS Permissions tab.

Type in * in the Hostname or IP section, check the Allow connections from non-privileged ports (ports higher than 1024) checkbox, check the Allow users to access mounted subfolders checkbox, and click on Save.

A new NFS access rule should be created.

The shared folder can be accessed using the path /volume1/web, as shown in the screenshot below. Remember the shared folder path, as you will need it later.

For the changes to take effect, click on Save.

Enabling the TFTP Service

To serve the iPXE Boot firmware and configuration files (iPXE scripts) to the PXE clients, you must enable the TFTP file service on your Synology NAS.

To enable the TFTP file service, navigate to Control Panel > File Services.

From the Advanced tab, scroll down to the TFTP section and check the Enable TFTP service checkbox, as marked in the following screenshot:

Click on Select as marked in the following screenshot to set a TFTP root folder:

All the shared folders of your Synology NAS should be listed. Select the pxeboot shared folder and click on Select.

Click on Apply for the changes to take effect.

The TFTP file service should be enabled, and the TFTP root folder should be set.

Installing DHCP Server

For PXE booting to work, you will need a working DHCP server.

To install a DHCP server on your Synology NAS, open the Package Center app, search for the keyword dhcp, and click on the DHCP Server package, as marked in the following screenshot:

Click on Install.

The DHCP Server package should be installed.

Once the DHCP Server package is installed, you can start it from the Application Menu of the DSM web interface of your Synology NAS.

The DHCP Server app should be opened. You can configure the DHCP server and enable PXE booting with iPXE from here.

Enabling DHCP for a Network Interface

To enable DHCP, open the DHCP Server app, select a network interface from the Network Interface section, and click Edit, as marked in the following screenshot:

Check the Enable DHCP server checkbox from the DHCP Server tab, as marked in the following screenshot:

Type in your desired Primary DNS and Secondary DNS servers. I am using 8.8.8.8 as the Primary DNS and 1.1.1.1 as the Secondary DNS server.

From the Subnet list section, click on Create.

You will be asked to create a DHCP subnet.

Usually, your home router will have a DHCP server running. You can’t turn it off as you need it for your home network devices (i.e., laptops, desktops, smartphones, and IoT devices). To get a working DHCP server on your Synology NAS without turning off the DHCP server of your home router, you will have to create the same DHCP subnet on your Synology NAS as your home router. You will have two DHCP servers, but the one configured on your Synology NAS will supply the required files for PXE booting. No matter which DHCP server your home-networking devices use, everything will work fine as they will be on the same subnet.

Type in your desired Start IP address, End IP address, Netmask, and Gateway, depending on the subnet of your home router.

My home router is using the subnet 192.168.0.0/24, and its IP address is 192.168.0.1. So, I have used the Gateway address 192.168.0.1 and Netmask 255.255.255.0. The Start and End IP addresses can be anything within the subnet. I have used the Start IP address 192.168.0.200 and the End IP address 192.168.0.230 in this case.

Type in 3600 (an hour) as the Address lease time. It is the time the DHCP server will reserve an IP address for a DHCP client.

Once you’re done, click on Create.

A new subnet should be created, as shown in the following screenshot:

Check the Enabled checkbox to enable the subnet and click on OK, as marked in the following screenshot:

Click on Yes.

DHCP should be enabled for your selected network interface.

Booting Ubuntu Installer in Live Mode

To compile iPXE from source code, you will need a Linux computer. I recommend you create a bootable USB thumb drive using the official Ubuntu Desktop 22.04 LTS ISO image and boot Ubuntu Desktop 22.04 LTS on your computer in Live mode from the USB thumb drive. If you need any assistance in creating an Ubuntu Desktop bootable USB thumb drive, check the article Installing Ubuntu Desktop 20.04 LTS.

Installing Required Dependencies for Building iPXE on Ubuntu Desktop Live

Once you’ve booted Ubuntu Desktop 22.04 LTS in Live mode on your computer, you will have to install all the required build tools and dependency packages for compiling iPXE.

Some of the dependency packages are available in the official universe repository of Ubuntu. So, you will have to enable the official universe package repository with the following command:

$ sudo apt-add-repository universe

To confirm the action, press <Enter>.

The official universe package repository should be enabled, and the APT package repository cache should start updating. It will take a few seconds to complete.

At this point, the APT package repository cache should be updated.

To install all the required build tools and dependency packages for compiling iPXE, run the following command:

$ sudo apt install build-essential liblzma-dev isolinux git

To confirm the installation, press Y and then press <Enter>.

The required packages are being downloaded from the internet. It will take a while to complete.

Once downloaded, the APT package manager will install them one by one. It could take a while to complete.

At this point, all the required packages should be installed.

Cloning iPXE Git Repository

Now that your Ubuntu Desktop Live is ready to compile iPXE, it’s time to download the iPXE source code.

First, navigate to the ~/Downloads directory as follows:

$ cd ~/Downloads

To clone the iPXE code repository from GitHub, run the following command:

$ git clone https://github.com/ipxe/ipxe.git

The iPXE GitHub repository is being cloned. It could take a few seconds to complete.

At this point, the iPXE GitHub repository should be cloned.

A new directory ipxe/ should be created in the ~/Downloads directory, as shown in the following screenshot:

Navigate to the ipxe/src/ directory as follows:

$ cd ipxe/src/

You should see a lot of directories there containing the iPXE source code.

Enabling iPXE NFS, HTTPS, and FTP Protocol Support

iPXE can download iPXE Boot configuration files (iPXE scripts) and operating system kernels using many protocols, such as HTTP, HTTPS, TFTP, FTP, and NFS. However, not all of these protocols (HTTPS, FTP, and NFS, for example) are enabled by default. If needed, you can modify the ipxe/src/config/general.h header file to enable any of these protocols.

You can open the config/general.h header file with the nano text editor as follows:

$ nano config/general.h

Scroll down to the Download protocols section, and you should see some lines with the text DOWNLOAD_PROTO_*.

The DOWNLOAD_PROTO_* line starting with #define enables the respective download protocol. In the same way, the DOWNLOAD_PROTO_* line starting with #undef disables the respective download protocol.

To enable the HTTPS protocol, change #undef to #define for DOWNLOAD_PROTO_HTTPS.

To enable the FTP protocol, change #undef to #define for DOWNLOAD_PROTO_FTP.

To enable the NFS protocol, change #undef to #define for DOWNLOAD_PROTO_NFS.
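After these edits, the relevant part of the Download protocols section of config/general.h should look roughly like this (a sketch showing only the affected lines; the surrounding lines stay unchanged):

```c
/* config/general.h -- Download protocols section after the edits (sketch) */
#define DOWNLOAD_PROTO_TFTP   /* Trivial File Transfer Protocol (enabled by default) */
#define DOWNLOAD_PROTO_HTTP   /* Hypertext Transfer Protocol (enabled by default) */
#define DOWNLOAD_PROTO_HTTPS  /* Secure Hypertext Transfer Protocol (was #undef) */
#define DOWNLOAD_PROTO_FTP    /* File Transfer Protocol (was #undef) */
#define DOWNLOAD_PROTO_NFS    /* Network File System protocol (was #undef) */
```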

I have enabled the NFS protocol for demonstration, as you can see in the following screenshot.

Once you’ve enabled the required download protocols, press <Ctrl> + X followed by Y and <Enter> to save the general.h header file.

Creating an iPXE Embedded Boot Configuration File

To configure iPXE to automatically boot from an iPXE Boot script stored on your Synology NAS, you need to create an iPXE Boot script and embed it with the iPXE firmware when you compile it.

Create an iPXE Boot script bootconfig.ipxe and open it with the nano text editor as follows:

$ nano bootconfig.ipxe

Type the following lines of code into the bootconfig.ipxe file:

#!ipxe

dhcp

chain tftp://192.168.0.114/config/boot.ipxe

Once you’re done, save the file by pressing <Ctrl> + X followed by Y and <Enter>.

NOTE: Here, 192.168.0.114 is the IP address of my Synology NAS. Don’t forget to replace it with yours. If you need any assistance in finding the IP address of your Synology NAS, read the article How Do I Find the IP Address of My Synology NAS?

Now, you’re ready to compile iPXE.

Compiling iPXE for BIOS-Based Motherboards

For BIOS-based motherboards, iPXE provides a few iPXE firmware files for PXE booting. They are: ipxe.pxe, undionly.kpxe, undionly.kkpxe, undionly.kkkpxe, etc.

Not all of these iPXE firmware files work on every BIOS-based motherboard. If you’re using a BIOS-based motherboard, you can try each one and see which works for you. I recommend you start with the ipxe.pxe firmware; if it does not work, try undionly.kpxe, then undionly.kkpxe, and finally undionly.kkkpxe.

You can compile the ipxe.pxe firmware and embed the bootconfig.ipxe iPXE script with the following command:

$ make bin/ipxe.pxe EMBED=bootconfig.ipxe

The ipxe.pxe firmware file is being compiled. It could take a few seconds to complete.

The ipxe.pxe firmware should be compiled at this point.

You can compile the undionly.kpxe firmware and embed the bootconfig.ipxe iPXE script with the following command:

$ make bin/undionly.kpxe EMBED=bootconfig.ipxe

The undionly.kpxe firmware should be compiled.

You can compile the undionly.kkpxe firmware and embed the bootconfig.ipxe iPXE script with the following command:

$ make bin/undionly.kkpxe EMBED=bootconfig.ipxe

The undionly.kkpxe firmware should be compiled.

You can compile the undionly.kkkpxe firmware and embed the bootconfig.ipxe iPXE script with the following command:

$ make bin/undionly.kkkpxe EMBED=bootconfig.ipxe

The undionly.kkkpxe firmware should be compiled.

You can find all the compiled iPXE firmware files for BIOS-based motherboards in the bin/ directory as shown in the following screenshot:

$ ls -lh bin/{ipxe.pxe,undionly.kpxe,undionly.kkpxe,undionly.kkkpxe}
Compiling iPXE for UEFI-Based Motherboards

For UEFI-based motherboards, you will need to compile only the iPXE firmware file ipxe.efi for PXE booting.

You can compile the ipxe.efi firmware and embed the bootconfig.ipxe iPXE script with the following command:
$ make bin-x86_64-efi/ipxe.efi EMBED=bootconfig.ipxe

The ipxe.efi firmware file is being compiled. It could take a few seconds to complete.


The ipxe.efi firmware should be compiled at this point.

You can find the compiled iPXE firmware file for UEFI-based motherboards in the bin-x86_64-efi/ directory, as you can see in the following screenshot:

$ ls -lh bin-x86_64-efi/ipxe.efi

Uploading the Required Files to the NAS

Once the iPXE Boot firmware files are compiled, copy them to the ~/Downloads directory so that you can easily upload them to your Synology NAS.

$ cp -v bin/{ipxe.pxe,undionly.kpxe,undionly.kkpxe,undionly.kkkpxe} bin-x86_64-efi/ipxe.efi ~/Downloads

The iPXE Boot firmware files (ipxe.pxe, undionly.kpxe, undionly.kkpxe, undionly.kkkpxe, and ipxe.efi) are copied to the ~/Downloads directory, as shown in the following screenshot:

Drag and drop all the iPXE Boot firmware files in the pxeboot shared folder of your Synology NAS.

Creating Default iPXE Boot Configuration File

iPXE was compiled with the bootconfig.ipxe embedded iPXE script so that, once the iPXE Boot firmware is loaded on a PXE client, it will look for the iPXE boot configuration file boot.ipxe in the config/ directory of the pxeboot shared folder of your Synology NAS.

To create a config/ folder in the pxeboot shared folder, navigate to the pxeboot shared folder using the File Station app and click on Create > Create folder as marked in the following screenshot:

Type in config as the folder name and click on OK.

A new folder config should be created.

Create or upload a new iPXE Boot configuration file (iPXE script) boot.ipxe in this folder.

If the iPXE Boot firmware successfully loads on a PXE client and it downloads and runs the iPXE script boot.ipxe, you will see the message Welcome to iPXE on the screen. This will help you verify that iPXE is working as expected.
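The original boot.ipxe listing is not reproduced on this page. Based on the behavior described above (printing the welcome message), a minimal config/boot.ipxe might look like the following sketch, not necessarily the author's exact file:

```text
#!ipxe

echo Welcome to iPXE
```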

Enabling PXE on Synology NAS

Once you have created the default iPXE Boot configuration file (iPXE script) config/boot.ipxe, you can enable PXE booting on your Synology NAS.

Open the DHCP Server app, navigate to the PXE section, and check the Enable PXE (Pre-boot Execution Environment) checkbox, as marked in the following screenshot:

Once PXE is enabled, select Local TFTP server, and click on Select.

All the iPXE Boot firmware files uploaded in the pxeboot shared folder should be listed.

For BIOS-based motherboards, you can select any of the iPXE Boot firmware files ipxe.pxe, undionly.kpxe, undionly.kkpxe, and undionly.kkkpxe. I recommend you select the ipxe.pxe firmware file first; if that does not work, try undionly.kpxe, then undionly.kkpxe, and finally undionly.kkkpxe.

For UEFI-based motherboards, select the iPXE Boot firmware file ipxe.efi and click on Select.

Once you’ve selected an iPXE Boot firmware, click on Apply.

PXE should be enabled, and your desired iPXE Boot firmware should be set as the PXE Boot loader.

Now, if you boot your computer via PXE, you should see the following iPXE window and the message Welcome to iPXE. It means that the PXE booting with the iPXE Boot firmware is working just fine.

Basics of iPXE Boot Configuration File

This section will show you how to write a basic iPXE Boot configuration file or iPXE script to boot multiple operating system installation images over the network with iPXE.

An example of an iPXE Boot configuration file (or iPXE script) config/boot.ipxe (on your pxeboot shared folder) with multiple boot menu entries should look as follows:
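The example listing itself did not survive the page conversion; the following sketch is reconstructed from the line-by-line description below. The IP address 192.168.0.114 and the os1/os2/os3 labels are assumptions, so the line numbers cited below match only approximately:

```text
#!ipxe

set http_server_ip 192.168.0.114
set nfs_server_ip 192.168.0.114
set nfs_root_path /volume1/web

menu Select an OS to boot
item os1 Operating System 1
item os2 Operating System 2
item os3 Operating System 3
choose --default exit --timeout 10000 option && goto ${option}
```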

Here, lines 3 and 4 are used to set two configuration settings: http_server_ip and nfs_server_ip. These two configuration settings set the webserver IP address (http_server_ip) and NFS server IP address (nfs_server_ip). You should set them to the IP address of your Synology NAS. If you need any assistance in finding the IP address of your Synology NAS, read the article How Do I Find the IP Address of My Synology NAS?

NOTE: Configuration settings are like variables in iPXE scripts. For more information on the set command, visit the official documentation of iPXE.

Line 5 sets the configuration setting nfs_root_path to the NFS path of the web shared folder of your Synology NAS. To find the NFS path of the web shared folder, check this article’s Enabling NFS for the web Shared Folder section.

Lines 7–13 are used to create an iPXE boot menu. Lines starting with the item command are used to create boot menu entries. You can have as many boot menu entries as you want in an iPXE boot menu.

In this example, there are three boot menu entries (lines 9, 10, and 11).

The item command is used to create a boot menu entry in the following format:

item <label> <display-text>

<display-text> is the text to be displayed in the iPXE boot menu.

<label> is the name/label of the code section where iPXE will jump to when the menu item/entry is selected.

For more information on the item command, visit the official iPXE documentation.

For example, line 9 displays the text Operating System 1 on the iPXE boot menu. When this boot menu entry is selected, it will jump to the code section os1.

You can define a named/labeled code section os1 (let’s say) for the menu item Operating System 1.
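A sketch of such a labeled section follows; the os_root value and the kernel/initrd paths here are placeholders, not real files:

```text
:os1
set os_root operating-system-1
kernel nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/vmlinuz
initrd nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/initrd
boot
```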

Once you select a menu item, it will execute only the named/labeled section of code defined in that menu item.

So, the menu entry Operating System 1 will execute the code section named/labeled os1 once selected.

The same goes for the Operating System 2 and Operating System 3 menu entries.

For a working iPXE boot menu configuration, look at the Booting Ubuntu Desktop 20.04 LTS Live via iPXE section.

 

PXE Booting Ubuntu Desktop 20.04 LTS Live With iPXE

First, download the Ubuntu Desktop 20.04 LTS ISO image from the official release page of Ubuntu 20.04 LTS.

Once the Ubuntu Desktop 20.04 LTS ISO image is downloaded, upload it to the web shared folder of your Synology NAS.

Right-click on the Ubuntu Desktop 20.04 LTS ISO image and click on Mount Virtual Drive, as marked in the following screenshot:

Make sure that the ISO image is mounted in the web shared folder. Also, check the Mount automatically on startup checkbox so that the ISO image will be mounted automatically when your Synology NAS boots. Then, click on Mount to mount the ISO image.

The ISO image of Ubuntu Desktop 20.04 LTS should be mounted in the web shared folder, as you can see in the following screenshot:

NOTE: Remember the folder name where the Ubuntu Desktop 20.04 LTS ISO image is mounted, as you will need it later to set the os_root configuration setting in the config/boot.ipxe file. In this case, ubuntu-20.04.4-desktop-amd64 is the mounted folder name.

The contents of the mounted Ubuntu Desktop 20.04 LTS ISO image.

To PXE boot Ubuntu Desktop 20.04 LTS using the iPXE Boot firmware, you will have to add a menu entry for Ubuntu Desktop 20.04 LTS and the required boot code on the config/boot.ipxe configuration file that you have created in the pxeboot shared folder.

Type in the following lines in the config/boot.ipxe configuration file to PXE boot Ubuntu Desktop 20.04 LTS using the iPXE Boot firmware:

#!ipxe

set http_server_ip 192.168.0.114

set nfs_server_ip 192.168.0.114

set nfs_root_path /volume1/web

menu Select an OS to boot

item ubuntu-desktop-2004-nfs Ubuntu Desktop 20.04 LTS (NFS)

choose --default exit --timeout 10000 option && goto ${option}

:ubuntu-desktop-2004-nfs

set os_root ubuntu-20.04.4-desktop-amd64

kernel nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/vmlinuz

initrd nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/initrd

imgargs vmlinuz initrd=initrd boot=casper maybe-ubiquity netboot=nfs ip=dhcp nfsroot=${nfs_server_ip}:${nfs_root_path}/${os_root} quiet splash

boot

Once you’ve added a menu entry for Ubuntu Desktop 20.04 LTS and the required boot code, the config/boot.ipxe iPXE boot configuration file should look as follows:

Set the os_root configuration setting to the folder’s name where the Ubuntu Desktop 20.04 LTS ISO image is mounted.

Now, boot your computer via PXE, and you should see the following iPXE boot menu.

Select Ubuntu Desktop 20.04 LTS (NFS) and press <Enter>.

You should see that the vmlinuz and initrd files are downloaded from the PXE Boot server running on your Synology NAS.

Ubuntu Desktop 20.04 LTS Live is being booted.


Once Ubuntu Desktop 20.04 LTS Live is booted, you should see the following window. You can install Ubuntu Desktop 20.04 LTS on your computer from here. If you need any assistance in installing Ubuntu Desktop 20.04 LTS on your computer, read the article Installing Ubuntu Desktop 20.04 LTS.

Ubuntu Desktop 20.04 LTS PXE booted in live mode using the iPXE Boot firmware.

PXE Booting Ubuntu Server 20.04 LTS With iPXE

First, download the Ubuntu Server 20.04 LTS ISO image from the official release page of Ubuntu 20.04 LTS.

Once the Ubuntu Server 20.04 LTS ISO image is downloaded, upload it to the web shared folder of your Synology NAS.

Right-click on the Ubuntu Server 20.04 LTS ISO image and click on Mount Virtual Drive, as marked in the following screenshot:

Make sure that the ISO image is mounted in the web shared folder. Also, check the Mount automatically on startup checkbox so that the ISO image will be mounted automatically when your Synology NAS boots. Then, click on Mount to mount the ISO image.

The ISO image of Ubuntu Server 20.04 LTS should be mounted in the web shared folder, as shown in the following screenshot:

NOTE: Remember the folder name where the Ubuntu Server 20.04 LTS ISO image is mounted, as you will need it later to set the os_root configuration setting in the config/boot.ipxe file. In this case, ubuntu-20.04.4-live-server-amd64 is the mounted folder name.

The contents of the mounted Ubuntu Server 20.04 LTS ISO image.

To PXE boot Ubuntu Server 20.04 LTS using the iPXE Boot firmware, you will have to add a menu entry for Ubuntu Server 20.04 LTS on the config/boot.ipxe configuration file that you have created in the pxeboot shared folder.

Add a menu entry for Ubuntu Server 20.04 LTS and type the required boot code in the config/boot.ipxe configuration file to PXE boot Ubuntu Server 20.04 LTS using the iPXE Boot firmware:

menu Select an OS to boot

item ubuntu-desktop-2004-nfs Ubuntu Desktop 20.04 LTS (NFS)

item ubuntu-server-2004-nfs Ubuntu Server 20.04 LTS (NFS)

choose --default exit --timeout 10000 option && goto ${option}

:ubuntu-server-2004-nfs

set os_root ubuntu-20.04.4-live-server-amd64

kernel nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/vmlinuz

initrd nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/initrd

imgargs vmlinuz initrd=initrd netboot=nfs ip=dhcp nfsroot=${nfs_server_ip}:${nfs_root_path}/${os_root} quiet

boot

Once you’ve added a menu entry for Ubuntu Server 20.04 LTS and typed in the required boot code, the config/boot.ipxe iPXE Boot configuration file should look as follows:

Make sure to set the os_root configuration setting to the folder’s name where the Ubuntu Server 20.04 LTS ISO image is mounted.

Now, boot your computer via PXE and you should see the following iPXE boot menu.

Select Ubuntu Server 20.04 LTS (NFS) and press <Enter>.

You should see that the vmlinuz and initrd files are downloaded from the PXE Boot server running on your Synology NAS.

Ubuntu Server 20.04 LTS is being booted.

Once Ubuntu Server 20.04 LTS is booted, you should see the following window. You can install Ubuntu Server 20.04 LTS on your computer/server from here. If you need assistance installing Ubuntu Server 20.04 LTS on your computer/server, read the article Installing Ubuntu Server 20.04 LTS.

PXE Booting Ubuntu Desktop 22.04 LTS Live With iPXE

First, download the Ubuntu Desktop 22.04 LTS ISO image from the official release page of Ubuntu 22.04 LTS.

Once the Ubuntu Desktop 22.04 LTS ISO image is downloaded, upload it to the web shared folder of your Synology NAS.

Right-click on the Ubuntu Desktop 22.04 LTS ISO image and click on Mount Virtual Drive, as marked in the following screenshot:

Make sure that the ISO image is mounted in the web shared folder. Also, check the Mount automatically on startup checkbox so that the ISO image will be mounted automatically when your Synology NAS boots. Then, click on Mount to mount the ISO image.

The ISO image of Ubuntu Desktop 22.04 LTS should be mounted in the web shared folder as shown in the following screenshot:

NOTE: Remember the folder name where the Ubuntu Desktop 22.04 LTS ISO image is mounted as you will need it later to set the os_root configuration settings in the config/boot.ipxe file. In this case, ubuntu-22.04-desktop-amd64 is the mounted folder name.

The contents of the mounted Ubuntu Desktop 22.04 LTS ISO image.

To PXE boot Ubuntu Desktop 22.04 LTS using the iPXE Boot firmware, you will have to add a menu entry for Ubuntu Desktop 22.04 LTS on the config/boot.ipxe configuration file that you have created in the pxeboot shared folder.

Add a menu entry for Ubuntu Desktop 22.04 LTS and type in the required boot code in the config/boot.ipxe configuration file to PXE boot Ubuntu Desktop 22.04 LTS using the iPXE Boot firmware:

menu Select an OS to boot

item ubuntu-desktop-2004-nfs Ubuntu Desktop 20.04 LTS (NFS)

item ubuntu-server-2004-nfs Ubuntu Server 20.04 LTS (NFS)

item ubuntu-desktop-2204-nfs Ubuntu Desktop 22.04 LTS (NFS)

choose --default exit --timeout 10000 option && goto ${option}

:ubuntu-desktop-2204-nfs

set os_root ubuntu-22.04-desktop-amd64

kernel nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/vmlinuz

initrd nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/initrd

imgargs vmlinuz initrd=initrd boot=casper maybe-ubiquity netboot=nfs ip=dhcp nfsroot=${nfs_server_ip}:${nfs_root_path}/${os_root} quiet splash

boot

Once you’ve added a menu entry for Ubuntu Desktop 22.04 LTS and typed in the required boot code, the config/boot.ipxe iPXE Boot configuration file should look as follows:

Make sure to set the os_root configuration setting to the folder’s name where the Ubuntu Desktop 22.04 LTS ISO image is mounted.

Now, boot your computer via PXE and you should see the following iPXE boot menu.

Select Ubuntu Desktop 22.04 LTS (NFS) and press <Enter>.

You should see that the vmlinuz and initrd files are being downloaded from the PXE Boot server running on your Synology NAS.

Ubuntu Desktop 22.04 LTS Live is being booted.

Once Ubuntu Desktop 22.04 LTS Live is booted, you should see the following window. You can install Ubuntu Desktop 22.04 LTS on your computer from here. If you need any assistance in installing Ubuntu Desktop 22.04 LTS on your computer, read the article Installing Ubuntu Desktop 20.04 LTS. Although the article is for Ubuntu Desktop 20.04 LTS, it may still be helpful.

Ubuntu Desktop 22.04 LTS PXE booted in live mode using the iPXE Boot firmware.

PXE Booting Ubuntu Server 22.04 LTS With iPXE

First, download the Ubuntu Server 22.04 LTS ISO image from the official release page of Ubuntu 22.04 LTS.

Once the Ubuntu Server 22.04 LTS ISO image is downloaded, upload it to the web shared folder of your Synology NAS.

Right-click on the Ubuntu Server 22.04 LTS ISO image and click on Mount Virtual Drive, as marked in the following screenshot:

Make sure that the ISO image is mounted in the web shared folder. Also, check the Mount automatically on startup checkbox so that the ISO image will be mounted automatically when your Synology NAS boots. Then, click on Mount to mount the ISO image.

The ISO image of Ubuntu Server 22.04 LTS should be mounted in the web shared folder as shown in the following screenshot:

NOTE: Remember the folder name where the Ubuntu Server 22.04 LTS ISO image is mounted, as you will need it later to set the os_root configuration setting in the config/boot.ipxe file. In this case, ubuntu-22.04-live-server-amd64 is the mounted folder name.

The contents of the mounted Ubuntu Server 22.04 LTS ISO image.

To PXE boot Ubuntu Server 22.04 LTS using the iPXE Boot firmware, you will have to add a menu entry for Ubuntu Server 22.04 LTS on the config/boot.ipxe configuration file that you have created in the pxeboot shared folder.

Add a menu entry for Ubuntu Server 22.04 LTS and type in the required boot code in the config/boot.ipxe configuration file to PXE boot Ubuntu Server 22.04 LTS using the iPXE Boot firmware:

menu Select an OS to boot

item ubuntu-desktop-2004-nfs Ubuntu Desktop 20.04 LTS (NFS)

item ubuntu-server-2004-nfs Ubuntu Server 20.04 LTS (NFS)

item ubuntu-desktop-2204-nfs Ubuntu Desktop 22.04 LTS (NFS)

item ubuntu-server-2204-nfs Ubuntu Server 22.04 LTS (NFS)

choose --default exit --timeout 10000 option && goto ${option}

:ubuntu-server-2204-nfs

set os_root ubuntu-22.04-live-server-amd64

kernel nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/vmlinuz

initrd nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/casper/initrd

imgargs vmlinuz initrd=initrd netboot=nfs ip=dhcp nfsroot=${nfs_server_ip}:${nfs_root_path}/${os_root} quiet

boot

Once you’ve added a menu entry for Ubuntu Server 22.04 LTS and typed in the required boot code, the config/boot.ipxe iPXE Boot configuration file should look as follows:

Make sure to set the os_root configuration setting to the folder’s name where the Ubuntu Server 22.04 LTS ISO image is mounted.

Now, boot your computer via PXE and you should see the following iPXE boot menu.

Select Ubuntu Server 22.04 LTS (NFS) and press <Enter>.

You should see that the vmlinuz and initrd files are being downloaded from the PXE Boot server running on your Synology NAS.

Ubuntu Server 22.04 LTS is being booted.

Once Ubuntu Server 22.04 LTS is booted, you should see the following window. You can install Ubuntu Server 22.04 LTS on your computer/server from here. If you need any assistance in installing Ubuntu Server 22.04 LTS on your computer/server, read the article Installing Ubuntu Server 20.04 LTS. Although the article is for Ubuntu Server 20.04 LTS, it may still be helpful.

PXE Booting Fedora 36 Workstation Live With iPXE

First, download the Fedora Workstation 36 ISO image from the official downloads page of Fedora Workstation.

Once the Fedora Workstation 36 ISO image is downloaded, upload it to the web shared folder of your Synology NAS.

Right-click on the Fedora Workstation 36 ISO image and click on Mount Virtual Drive, as marked in the following screenshot:

Make sure that the ISO image is mounted in the web shared folder. Also, check the Mount automatically on startup checkbox so that the ISO image will be mounted automatically when your Synology NAS boots. Then, click on Mount to mount the ISO image.

The ISO image of Fedora Workstation 36 Live should be mounted in the web shared folder, as shown in the screenshot below.

NOTE: Remember the folder name where the Fedora Workstation 36 Live ISO image is mounted, as you will need it later to set the os_root configuration setting in the config/boot.ipxe file. In this case, Fedora-Workstation-Live-x86_64-36-1.5 is the mounted folder name.

The contents of the mounted Fedora Workstation 36 Live ISO image.

To PXE boot Fedora Workstation 36 Live using the iPXE Boot firmware, you will have to add a menu entry for Fedora Workstation 36 Live on the config/boot.ipxe configuration file that you have created in the pxeboot shared folder. Fedora Workstation can be PXE booted using the NFS protocol and the HTTP/HTTPS protocol. This section shows you how to PXE boot Fedora Workstation using the NFS and HTTP protocols.

If you want to PXE boot Fedora Workstation 36 Live with the iPXE Boot firmware using the NFS protocol, add a menu entry for Fedora Workstation 36 Live and type in the required boot code in the config/boot.ipxe configuration file as follows:

menu Select an OS to boot

item ubuntu-desktop-2004-nfs Ubuntu Desktop 20.04 LTS (NFS)

item ubuntu-server-2004-nfs Ubuntu Server 20.04 LTS (NFS)

item ubuntu-desktop-2204-nfs Ubuntu Desktop 22.04 LTS (NFS)

item ubuntu-server-2204-nfs Ubuntu Server 22.04 LTS (NFS)

item fedora-workstation-36-live-nfs Fedora Workstation 36 Live (NFS)

choose --default exit --timeout 10000 option && goto ${option}

:fedora-workstation-36-live-nfs

set os_root Fedora-Workstation-Live-x86_64-36-1.5

kernel nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/images/pxeboot/vmlinuz

initrd nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/images/pxeboot/initrd.img

imgargs vmlinuz initrd=initrd.img ip=dhcp rd.live.image root=live:nfs://${nfs_server_ip}${nfs_root_path}/${os_root}/LiveOS/squashfs.img

boot

Once you’ve added a menu entry for Fedora Workstation 36 Live and typed in the required boot code for booting Fedora Workstation using the NFS protocol, the config/boot.ipxe iPXE Boot configuration file should look as follows:

If you want to PXE boot Fedora Workstation 36 Live with the iPXE Boot firmware using the HTTP protocol, add a menu entry for Fedora Workstation 36 Live and type in the required boot code in the config/boot.ipxe configuration file as follows:

menu Select an OS to boot

item ubuntu-desktop-2004-nfs Ubuntu Desktop 20.04 LTS (NFS)

item ubuntu-server-2004-nfs Ubuntu Server 20.04 LTS (NFS)

item ubuntu-desktop-2204-nfs Ubuntu Desktop 22.04 LTS (NFS)

item ubuntu-server-2204-nfs Ubuntu Server 22.04 LTS (NFS)

item fedora-workstation-36-live-nfs Fedora Workstation 36 Live (NFS)

item fedora-workstation-36-live-http Fedora Workstation 36 Live (HTTP)

choose --default exit --timeout 10000 option && goto ${option}

:fedora-workstation-36-live-http

set os_root Fedora-Workstation-Live-x86_64-36-1.5

initrd http://${http_server_ip}/${os_root}/images/pxeboot/initrd.img

kernel http://${http_server_ip}/${os_root}/images/pxeboot/vmlinuz initrd=initrd.img ip=dhcp rd.live.image root=live:http://${http_server_ip}/${os_root}/LiveOS/squashfs.img

boot

Once you’ve added a menu entry for Fedora Workstation 36 Live and typed in the required boot code for booting Fedora Workstation using the HTTP protocol, the config/boot.ipxe iPXE Boot configuration file should look as follows:

Make sure to set the os_root configuration setting to the folder’s name where the Fedora Workstation 36 Live ISO image is mounted.

Now, boot your computer via PXE and you should see the following iPXE boot menu.

Select either Fedora Workstation 36 Live (NFS) or Fedora Workstation 36 Live (HTTP) and press <Enter>.

If you have selected Fedora Workstation 36 Live (NFS), you should see that the vmlinuz and initrd.img files are being downloaded from the PXE Boot server running on your Synology NAS using the NFS protocol.

If you have selected Fedora Workstation 36 Live (HTTP), you should see that the vmlinuz and initrd.img files are being downloaded from the PXE Boot server running on your Synology NAS using the HTTP protocol.

Fedora Workstation 36 Live is being booted.

Once Fedora Workstation 36 Live is booted, you should see the following window. You can install Fedora Workstation 36 on your computer from here. If you need any assistance installing Fedora Workstation 36 on your computer, read the article How to Install Fedora Workstation 35 from USB. Although the article was published several months ago, it will still be helpful.

Fedora Workstation 36 PXE booted in live mode using the iPXE Boot firmware.

The Fedora Workstation 36 installer once PXE booted using the iPXE Boot firmware.

Conclusion

This article discussed how to configure the TFTP, HTTP (webserver), and NFS file services on your Synology NAS for PXE booting. I have shown you how to compile iPXE (for BIOS and UEFI motherboards) and copy the necessary iPXE Boot firmware files to your Synology NAS. I also provided a guide on how to install and configure the DHCP Server package for PXE booting on BIOS/UEFI systems over the network with iPXE. Finally, I have shown you how to add the necessary iPXE boot menu entries and the required boot codes for PXE booting the following Linux distributions with iPXE:

  • Ubuntu Desktop 20.04 LTS
  • Ubuntu Server 20.04 LTS
  • Ubuntu Desktop 22.04 LTS
  • Ubuntu Server 22.04 LTS
  • Fedora Workstation 36

References

  1. https://ipxe.org/download
  2. https://ipxe.org/embed
  3. https://ipxe.org/appnote/buildtargets
  4. https://ipxe.org/cmd/set
  5. https://ipxe.org/cmd/menu
  6. https://ipxe.org/cmd/item
  7. https://ipxe.org/cmd/choose
  8. https://ipxe.org/cmd/kernel
  9. https://ipxe.org/cmd/imgfetch?redirect=1
  10. https://ipxe.org/cmd/imgargs
  11. https://forum.ipxe.org/showthread.php?tid=6989
  12. https://medium.com/@peter.bolch/how-to-netboot-with-ipxe-6a41db514dee
  13. https://medium.com/@peter.bolch/how-to-netboot-with-ipxe-6191ed711348
  14. http://manpages.ubuntu.com/manpages/bionic/man7/casper.7.html
  15. https://anaconda-installer.readthedocs.io/en/latest/boot-options.html



Source link


The State of Open Source Security Highlights Many Organizations Lacking Strategies to Address Application Vulnerabilities Arising from Code Reuse

BOSTON — June 21, 2022 — Snyk, the leader in developer security, and The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced the results of their first joint research report, The State of Open Source Security.

The results detail the significant security risks resulting from the widespread use of open source software within modern application development as well as how many organizations are currently ill-prepared to effectively manage these risks. Specifically, the report found:

  • Over four out of every ten (41%) organizations don’t have high confidence in their open source software security;
  • The average application development project has 49 vulnerabilities and 80 direct dependencies (open source code called by a project); and,
  • The time it takes to fix vulnerabilities in open source projects has steadily increased, more than doubling from 49 days in 2018 to 110 days in 2021.

“Software developers today have their own supply chains – instead of assembling car parts,  they are assembling code by patching together existing open source components with their unique code. While this leads to increased productivity and innovation, it has also created significant security concerns,” said Matt Jarvis, Director, Developer Relations, Snyk. “This first-of-its-kind report found widespread evidence suggesting industry naivete about the state of open source security today. Together with The Linux Foundation, we plan to leverage these findings to further educate and equip the world’s developers, empowering them to continue building fast, while also staying secure.”

“While open source software undoubtedly makes developers more efficient and accelerates innovation, the way modern applications are assembled also makes them more challenging to secure,” said Brian Behlendorf, General Manager, Open Source Security Foundation (OpenSSF). “This research clearly shows the risk is real, and the industry must work even more closely together in order to move away from poor open source or software supply chain security practices.” (You can read the OpenSSF’s blog post about the report here)

Snyk and The Linux Foundation will be discussing the report’s full findings, as well as recommended actions to improve the security of open source software development, during a number of upcoming events.

41% of Organizations Don’t Have High Confidence in Open Source Software Security

Modern application development teams are leveraging code from all sorts of places. They reuse code from other applications they’ve built and search code repositories to find open source components that provide the functionality they need. The use of open source requires a new way of thinking about developer security that many organizations have not yet adopted.

Further consider:

  • Less than half (49%) of organizations have a security policy for OSS development or usage (and this number is a mere 27% for medium-to-large companies); and,
  • Three in ten (30%) organizations without an open source security policy openly recognize that no one on their team is currently directly addressing open source security.

Average Application Development Project: 49 Vulnerabilities Spanning 80 Direct Dependencies

When developers incorporate an open source component in their applications, they immediately become dependent on that component and are at risk if that component contains vulnerabilities. The report shows how real this risk is, with dozens of vulnerabilities discovered across many direct dependencies in each application evaluated.

This risk is also compounded by indirect, or transitive, dependencies, which are the dependencies of your dependencies. Many developers do not even know about these dependencies, making them even more challenging to track and secure.

That said, to some degree, survey respondents are aware of the security complexities created by open source in the software supply chain today:

  • Over one-quarter of survey respondents noted they are concerned about the security impact of their direct dependencies;
  • Only 18% of respondents said they are confident of the controls they have in place for their transitive dependencies; and,
  • Forty percent of all vulnerabilities were found in transitive dependencies.

Time to Fix: More Than Doubled from 49 Days in 2018 to 110 Days in 2021

As application development has increased in complexity, the security challenges faced by development teams have also become increasingly complex. While this makes development more efficient, the use of open source software adds to the remediation burden. The report found that fixing vulnerabilities in open source projects takes almost 20% longer (18.75%) than in proprietary projects.

About The Report

The State of Open Source Security is a partnership between Snyk and The Linux Foundation, with support from OpenSSF, the Cloud Native Security Foundation, the Continuous Delivery Foundation and the Decliver Foundation. The report is based on a survey of over 550 respondents in the first quarter of 2022 as well as data from Snyk Open Source, which has scanned more than 1.3B open source projects.

About Snyk

Snyk is the leader in developer security. We empower the world’s developers to build secure applications and equip security teams to meet the demands of the digital world. Our developer-first approach ensures organizations can secure all of the critical components of their applications from code to cloud, leading to increased developer productivity, revenue growth, customer satisfaction, cost savings and an overall improved security posture. Snyk’s Developer Security Platform automatically integrates with a developer’s workflow and is purpose-built for security teams to collaborate with their development teams. Snyk is used by 1,500+ customers worldwide today, including industry leaders such as Asurion, Google, Intuit, MongoDB, New Relic, Revolut, and Salesforce.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.





Data Processing and Infrastructure Processing Units – DPU and IPU – are changing the way enterprises deploy and manage compute resources across their networks; OPI will nurture an ecosystem to enable easy adoption of these innovative technologies 

SAN FRANCISCO, Calif. – June 21, 2022 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the new Open Programmable Infrastructure (OPI) Project. OPI will foster a community-driven, standards-based open ecosystem for next-generation architectures and frameworks based on DPU and IPU technologies. OPI is designed to facilitate the simplification of network, storage and security APIs within applications to enable more portable and performant applications in the cloud and datacenter across DevOps, SecOps and NetOps.

Founding members of OPI include Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA and Red Hat with a growing number of contributors representing a broad range of leading companies in their fields ranging from silicon and device manufactures, ISVs, test and measurement partners, OEMs to end users. 

“When new technologies emerge, there is so much opportunity for both technical and business innovation but barriers often include a lack of open standards and a thriving community to support them,” said Mike Dolan, senior vice president of Projects at the Linux Foundation. “DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and datacenter, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies.”

DPUs and IPUs are increasingly being used to support high-speed network capabilities and packet processing for applications like 5G, AI/ML, Web3, crypto and more because of their flexibility in managing resources across networking, compute, security and storage domains. Instead of the servers being the infrastructure unit for cloud, edge or the data center, operators can now create pools of disaggregated networking, compute and storage resources supported by DPUs, IPUs, GPUs, and CPUs to meet their customers’ application workloads and scaling requirements.

OPI will help establish and nurture an open and creative software ecosystem for DPU and IPU-based infrastructures. As more DPUs and IPUs are offered by various vendors, the OPI Project seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate.  The project intends to:

  • Define DPU and IPU, 
  • Delineate vendor-agnostic frameworks and architectures for DPU- and IPU-based software stacks applicable to any hardware solutions, 
  • Enable the creation of a rich open source application ecosystem,
  • Integrate with existing open source projects aligned to the same vision such as the Linux kernel, and, 
  • Create new APIs for interaction with, and between, the elements of the DPU and IPU ecosystem, including hardware, hosted applications, host node, and the remote provisioning and orchestration of software

With several working groups already active, the initial technology contributions will come in the form of the Infrastructure Programmer Development Kit (IPDK) that is now an official sub-project of OPI governed by the Linux Foundation. IPDK is an open source framework of drivers and APIs for infrastructure offload and management that runs on a CPU, IPU, DPU or switch. 

In addition, NVIDIA DOCA , an open source software development framework for NVIDIA’s BlueField DPU, will be contributed to OPI to help developers create applications that can be offloaded, accelerated, and isolated across DPUs, IPUs, and other hardware platforms. 

For more information visit: https://opiproject.org; start contributing here: https://github.com/opiproject/opi.

Founding Member Comments

Geng Lin, EVP and Chief Technology Officer, F5

“The emerging DPU market is a golden opportunity to reimagine how infrastructure services can be deployed and managed. With collective collaboration across many vendors representing both the silicon devices and the entire DPU software stack, an ecosystem is emerging that will provide a low friction customer experience and achieve portability of services across a DPU enabled infrastructure layer of next generation data centers, private clouds, and edge deployments.”

Patricia Kummrow, CVP and GM, Ethernet Products Group, Intel

“Intel is committed to open software to advance collaborative and competitive ecosystems and is pleased to be a founding member of the Open Programmable Infrastructure project, as well as fully supportive of the Infrastructure Processor Development Kit (IPDK) as part of OPI. We look forward to advancing these tools, with the Linux Foundation, fulfilling the need for a programmable infrastructure across cloud, data center, communication and enterprise industries, making it easier for developers to accelerate innovation and advance technological developments.”

Ram Periakaruppan, VP and General Manager, Network Test and Security Solutions Group, Keysight Technologies

“Programmable infrastructure built with DPUs/IPUs enables significant innovation for networking, security, storage and other areas in disaggregated cloud environments. As a founding member of the Open Programmable Infrastructure Project, we are committed to providing our test and validation expertise as we collaboratively develop and foster a standards-based open ecosystem that furthers infrastructure development, enabling cloud providers to maximize their investment.”

Cary Ussery, Vice President, Software and Support, Processors, Marvell

“Data center operators across multiple industry segments are increasingly incorporating DPUs as an integral part of their infrastructure processing to offload complex workloads from general purpose to more robust compute platforms. Marvell strongly believes that software standardization in the ecosystem will significantly contribute to the success of workload acceleration solutions. As a founding member of the OPI Project, Marvell aims to address the need for standardization of software frameworks used in provisioning, lifecycle management, orchestration, virtualization and deployment of workloads.”

Kevin Deierling, vice president of Networking at NVIDIA 

“The fundamental architecture of data centers is evolving to meet the demands of private and hyperscale clouds and AI, which require extreme performance enabled by DPUs such as the NVIDIA BlueField and open frameworks such as NVIDIA DOCA. These will support OPI to provide BlueField users with extreme acceleration, enabled by common, multi-vendor management and applications. NVIDIA is a founding member of the Linux Foundation’s Open Programmable Infrastructure Project to continue pushing the boundaries of networking performance and accelerated data center infrastructure while championing open standards and ecosystems.”

Erin Boyd, director of emerging technologies, Red Hat

“As a founding member of the Open Programmable Infrastructure project, Red Hat is committed to helping promote, grow and collaborate on the emergent advantage that new hardware stacks can bring to the cloud-native community, and we believe that the formalization of OPI into the Linux Foundation is an important step toward achieving this in an open and transparent fashion. Establishing an open standards-based ecosystem will enable us to create fully programmable infrastructure, opening up new possibilities for better performance, consumption, and the ability to more easily manage unique hardware at scale.”

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members. It is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

 

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. Red Hat is a registered trademark of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

Marvell Disclaimer: This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this press release. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.

Media Contact
Carolyn Lehman
The Linux Foundation
clehman@linuxfoundation.org





What is Kinit Command in Kerberos, and What Does it do?

Kinit in Linux is a command used to obtain, cache, and renew Kerberos ticket-granting tickets. The tool serves the same purpose for which the MIT and SEAM references use Kinit in other Kerberos implementations. Notably, you can only use the Kinit command once you are registered as a principal with the KDC, or Key Distribution Center.

Ideally, the KDC defaults, identified by the [realms] and [kdcdefaults] sections of kdc.conf (the KDC configuration file), come in handy if you do not indicate any ticket flags on the command line.

This article describes what the Kinit Linux command is. It also provides a step-by-step guide on using the Kinit tool to obtain, renew, or cache your ticket-granting tickets. Finally, we will highlight Kinit syntax and flags, environment variables, and files.

How to Authenticate With Kinit

One of the measures you should always take after installing Kerberos on your system is to check that all the packages exist. Again, you will have to test it from both the server and user machines. Once done successfully, you can proceed to authenticate with Kinit using the following steps.

Step 1: Confirm if the Kinit Tool Exists
Initially, we confirm if Kerberos installation was successful in the system by executing the following commands on the console.
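The exact commands were not reproduced here; as a minimal sketch, you can check that the Kinit binary is on your PATH and query the installed Kerberos version (klist -V is the MIT Kerberos version flag):

```shell
# Check that the kinit binary is available on the PATH
command -v kinit

# Print the installed Kerberos version (MIT Kerberos)
klist -V
```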

Step 2: Configure the krb5.conf File
After confirming Kerberos exists in the system, the next step is to configure krb5.conf in the /KenHint/krb5.conf file. If the file does not exist, the user can create one and confirm if the port name and host address are similar. The file should look like this.
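The file itself is not reproduced above; a minimal krb5.conf sketch, assuming a hypothetical realm KENHINT.COM with a KDC at kdc.kenhint.com, could look like this:

```
[libdefaults]
    default_realm = KENHINT.COM

[realms]
    KENHINT.COM = {
        kdc = kdc.kenhint.com
        admin_server = kdc.kenhint.com
    }

[domain_realm]
    .kenhint.com = KENHINT.COM
```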

Step 3: Validate the Initialization of the Kerberos Server
The next procedure is to validate that the Kerberos server is running, then try getting a ticket for any user on the server. For this demonstration, we will fetch a ticket for the user KenHint. Our password for the user will be LinHint.
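The commands themselves were omitted; assuming the user principal KenHint exists in the KDC database, fetching and inspecting a ticket would look roughly like this:

```shell
# Request a ticket-granting ticket for the user KenHint
kinit KenHint
# (enter the password LinHint when prompted)

# Display the cached tickets
klist
```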

Of course, it is also possible to list your cached tickets using the klist Linux tool, but that is beyond the scope of this write-up.

Kinit Command: Description and Flags

Using Kinit on Linux effectively begins with understanding what it is. As you will find out, the Kinit command reinitializes the credentials cache if you are not renewing an existing ticket. The result will be a new ticket-granting ticket from the KDC.

Also, if you do not specify the principal on your command line but do specify the -s flag, Kinit obtains the principal name from the credentials cache. Besides, the new credentials cache will become your default cache unless you use the -c flag to state the cache name.

The Kinit syntax or flags feature the following denotations;

[-V] [-l lifetime] [-s start_time] [-r renewable_life] [-p | -P] [-f | -F] [-a | -A] [-C] [-E] [-v] [-R] [-k [-t keytab_file]] [-c cache_name] [-n] [-S service_name] [-T armor_ccache] [-X attribute[=value]] [principal]

These initials stand for the following;

  • -V displays verbose output
  • -l lifetime requests tickets with the given lifetime; the value is a number followed by a time delimiter such as s (seconds), m (minutes), h (hours), or d (days)
  • -s start_time requests postdated tickets that become valid at start_time
  • -r renewable_life requests renewable tickets
  • -p requests proxiable tickets
  • -P requests non-proxiable tickets
  • -f requests forwardable tickets
  • -F requests non-forwardable tickets
  • -a requests tickets with local addresses
  • -A requests tickets without addresses
  • -C requests canonicalization of the principal name
  • -E treats the principal name as an enterprise name
  • -v validates ticket-granting tickets through the KDC
  • -R renews ticket-granting tickets
  • -k [-t keytab_file] requests tickets using a key from the host keytab file
  • -c cache_name specifies the credentials cache to use
  • -n requests anonymous processing
  • -S service_name specifies an alternate service name for getting initial tickets
  • -T armor_ccache identifies the name of a cache that already contains a ticket
  • -X attribute[=value] specifies a pre-authentication attribute and value for pre-authentication plugins

The ticket duration value for the -l, -r, and -s flags is written as ndnhnmns, where each n stands for a number and d, h, m, and s denote days, hours, minutes, and seconds, respectively; for example, 90h means 90 hours.

The command below creates a renewable ticket for the user KenHint. The ticket has a lifetime of 10 hours and is renewable for 5 days.
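The command itself was not reproduced above; a sketch of it, using the hypothetical principal KenHint, would be:

```shell
# 10-hour lifetime (-l), renewable for 5 days (-r)
kinit -l 10h -r 5d KenHint
```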

Kinit Environment Variable and Files

Kinit is among the Kerberos commands that operate with the KRB5CCNAME environment variable. The environment has the following major Kinit files:

Files

  • /usr/krb5/bin/kinit is the kinit binary itself
  • /var/krb5/security/creds/krb5cc_[KenHint] is the default credentials cache, where KenHint is the user
  • /etc/krb5/krb5.keytab is the default location of the local host's keytab file
  • /var/krb5/krb5kdc/kdc.conf is the Kerberos Key Distribution Center configuration file

Kinit Command Examples

Common Kinit command examples include;

  1. The Kinit command below comes in handy for requesting credentials valid for authentication from the host.
  2. Initial ticket request
  3. Renew a ticket:
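The example commands themselves were omitted; minimal sketches of the three cases above might be:

```shell
# 1. Request credentials valid for authentication, for the default principal
kinit

# 2. Initial ticket request for an explicit (hypothetical) principal
kinit KenHint@KENHINT.COM

# 3. Renew an existing renewable ticket
kinit -R
```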

Conclusion

The Kinit command in Kerberos Linux consists of an array of flags. It comes in handy in a variety of applications. It is ideal for requesting valid credentials, proxiable credentials, forwarded credentials, and renewing tickets. You will also find it helpful to display the Kinit help menu whenever you experience a problem.






CMake is a popular and helpful cross-platform, open-source set of tools that utilize compiler- and platform-independent configuration files to build, test, and package projects. CMake was developed as the solution for a cross-platform build space for open-source projects.

CPack is a packaging tool that is cross-platform and distributed with CMake. It can be used independently of CMake, but it uses generator concepts from the CMake suite.

This guide covers the installation and usage of CMake and CPack.

How to Install CMake

You can install CMake via the command line or the Ubuntu Software Centre. In this case, we will install it via the command line, but you can check the complete CMake installation methods for more details.

First, update your repository.
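The update command was not shown; on Ubuntu, refreshing the package index is typically done with:

```shell
# Refresh the package index
sudo apt update
```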

Next, enter the command below to install CMake.

$ sudo snap install cmake --classic

You can confirm the installation by checking its version.
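For example:

```shell
# Print the installed CMake version
cmake --version
```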

Using CMake to Configure, Build, and Debug a C++ Project on Linux

CMake is popular for managing code builds for C++ projects, and it does so with the help of a CMakeLists.txt file in each directory. These files define the tasks the build system should undertake.

In our case, we will write a simple C++ program using Visual Studio Code and build it using CMake.

Ensure you have the C++ extension for Visual Studio Code installed, a compiler such as gcc, a debugger such as gdb, and CMake.

You can install gcc using the command:

$ sudo apt-get install build-essential gdb

To start, create a working folder and open it in Visual Studio Code.

$ mkdir cmakedemo
$ cd cmakedemo

Open Visual Studio Code

Once Visual Studio Code opens, open the Command Palette by pressing Ctrl + Shift + P.

To quickly create the needed CMake files, type CMake: Quick Start and choose the option like in the image below.

If prompted to choose between Library and Executable, choose Executable. You will notice that two files, the main source file and CMakeLists.txt, will be created.

You also need to select a Kit to inform CMake which compiler to use.

First, check your gcc version on the terminal. On the Palette, search for Kit and choose the one that matches your version.

At the bottom of the window, you will see the selected kit. In our case, it’s GCC 9.4.0 x86_64-linux-gnu.

CMake also uses a variant that contains instructions on building the project. Still on the Palette, type CMake: Select Variant. There are four variants to choose from.

  • Debug: it includes the debug details, but it disables optimizations.
  • Release: no debug details, but optimization gets included.
  • RelWithDebInfo: it includes debug info and enables optimizations.
  • MinSizeRel: it doesn’t include the debug details, but it optimizes for size.

In our case, we need optimization and debugging information. So, we will choose Debug.

Configuration

Everything is set. Open the Palette and type CMake: Configure, click the enter button, and CMake will generate the build files and configure the project.

The final step is to build the project. You can do so by clicking Build at the bottom of the screen or running CMake: Build.

That’s it! You’ve successfully used the CMake tool to configure and build the project.

In case of any error with the project, simply run CMake: Debug, and it will show where the error is in the code.

CMake With CPack

Once you have a project configured and built as we did with CMake, you need a way to package the software to make it installable. You need a tool that lets you build the project on your development machine and produce a form that can be transferred to and installed on another device. That is what CPack does.

CPack will create an installer and a package for the project. It can create binary and source packages. The good thing is that CPack supports the creation of installers for OS X, RPMs, zip files, .tar.gz, Debian packages, Windows, and .sh.

CPack creates a duplicate of the project’s source tree as a tar or zip file; you can transfer that file to another machine, extract it into the correct directory, and have your project up and running. CPack does most of the work, including creating a temporary directory for the project and copying the install tree into a suitable format for the packaging tool.

Using CPack With CMake

Since CPack is part of CMake, combining the two is pretty easy. In our C++ project using CMake, we already have a CMakeLists.txt file. Inside the file, there is auto-generated support for CPack.
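The auto-generated lines are not reproduced above; the CPack section that the CMake Quick Start typically adds to CMakeLists.txt looks roughly like this (the exact variables may differ by CMake version):

```
set(CPACK_PROJECT_NAME ${PROJECT_NAME})
set(CPACK_PROJECT_VERSION ${PROJECT_VERSION})
include(CPack)
```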

Therefore, the remaining part is to generate the installers and packages.

To do so, first, navigate to the build directory inside the project directory. In our example, it will be:

You can list the different files in the directory.
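The navigation and listing commands were omitted; assuming the cmakedemo project folder created earlier, they would be:

```shell
# Enter the build directory generated by CMake
cd cmakedemo/build

# List the generated files
ls
```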

To generate the installers, run the command:

$ cpack -C CPackConfig.cmake

You can note the different generators from the output below, including .tar.gz, .sh, and .tar.z.

Alternatively, you can run the command:

$ cpack -C CPackSourceConfig.cmake

You now have the needed packages for your project.

Conclusion

CMake and CPack are helpful tools for generating configuration files, building, testing, and packaging projects. There are tons of options that you can use with the commands to achieve different things. This guide covered what CPack and CMake are, then went ahead to show an example usage that configures and builds a C++ project with CMake and packages it with CPack.


