The State of Open Source Security Highlights Many Organizations Lacking Strategies to Address Application Vulnerabilities Arising from Code Reuse

BOSTON — June 21, 2022 — Snyk, the leader in developer security, and The Linux Foundation, a global nonprofit organization enabling innovation through open source, today announced the results of their first joint research report, The State of Open Source Security.

The results detail the significant security risks resulting from the widespread use of open source software within modern application development as well as how many organizations are currently ill-prepared to effectively manage these risks. Specifically, the report found:

  • Over four out of every ten (41%) organizations don’t have high confidence in their open source software security;
  • The average application development project has 49 vulnerabilities and 80 direct dependencies (open source code called by a project); and,
  • The time it takes to fix vulnerabilities in open source projects has steadily increased, more than doubling from 49 days in 2018 to 110 days in 2021.

“Software developers today have their own supply chains – instead of assembling car parts, they are assembling code by patching together existing open source components with their unique code. While this leads to increased productivity and innovation, it has also created significant security concerns,” said Matt Jarvis, Director, Developer Relations, Snyk. “This first-of-its-kind report found widespread evidence suggesting industry naivete about the state of open source security today. Together with The Linux Foundation, we plan to leverage these findings to further educate and equip the world’s developers, empowering them to continue building fast, while also staying secure.”

“While open source software undoubtedly makes developers more efficient and accelerates innovation, the way modern applications are assembled also makes them more challenging to secure,” said Brian Behlendorf, General Manager, Open Source Security Foundation (OpenSSF). “This research clearly shows the risk is real, and the industry must work even more closely together in order to move away from poor open source or software supply chain security practices.” (You can read the OpenSSF’s blog post about the report here.)

Snyk and The Linux Foundation will be discussing the report’s full findings as well as recommended actions to improve the security of open source software development during a number of upcoming events.

41% of Organizations Don’t Have High Confidence in Open Source Software Security

Modern application development teams are leveraging code from all sorts of places. They reuse code from other applications they’ve built and search code repositories to find open source components that provide the functionality they need. The use of open source requires a new way of thinking about developer security that many organizations have not yet adopted.

Further consider:

  • Less than half (49%) of organizations have a security policy for OSS development or usage (and this number is a mere 27% for medium-to-large companies); and,
  • Three in ten (30%) organizations without an open source security policy openly recognize that no one on their team is currently directly addressing open source security.

Average Application Development Project: 49 Vulnerabilities Spanning 80 Direct Dependencies

When developers incorporate an open source component in their applications, they immediately become dependent on that component and are at risk if that component contains vulnerabilities. The report shows how real this risk is, with dozens of vulnerabilities discovered across many direct dependencies in each application evaluated.

This risk is also compounded by indirect, or transitive, dependencies, which are the dependencies of your dependencies. Many developers do not even know about these dependencies, making them even more challenging to track and secure.

That said, to some degree, survey respondents are aware of the security complexities created by open source in the software supply chain today:

  • Over one-quarter of survey respondents noted they are concerned about the security impact of their direct dependencies;
  • Only 18% of respondents said they are confident of the controls they have in place for their transitive dependencies; and,
  • Forty percent of all vulnerabilities were found in transitive dependencies.

Time to Fix: More Than Doubled from 49 Days in 2018 to 110 Days in 2021

As application development has increased in complexity, the security challenges faced by development teams have also become increasingly complex. While the use of open source software makes development more efficient, it also adds to the remediation burden. The report found that fixing vulnerabilities in open source projects takes almost 20% longer (18.75%) than in proprietary projects.

About The Report

The State of Open Source Security is a partnership between Snyk and The Linux Foundation, with support from OpenSSF, the Cloud Native Computing Foundation, the Continuous Delivery Foundation and the Eclipse Foundation. The report is based on a survey of over 550 respondents in the first quarter of 2022 as well as data from Snyk Open Source, which has scanned more than 1.3B open source projects.

About Snyk

Snyk is the leader in developer security. We empower the world’s developers to build secure applications and equip security teams to meet the demands of the digital world. Our developer-first approach ensures organizations can secure all of the critical components of their applications from code to cloud, leading to increased developer productivity, revenue growth, customer satisfaction, cost savings and an overall improved security posture. Snyk’s Developer Security Platform automatically integrates with a developer’s workflow and is purpose-built for security teams to collaborate with their development teams. Snyk is used by 1,500+ customers worldwide today, including industry leaders such as Asurion, Google, Intuit, MongoDB, New Relic, Revolut, and Salesforce.

About The Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.





SAN FRANCISCO—June 21, 2022—Project Nephio, an open source initiative of partners across the telecommunications industry working towards true cloud-native automation, today announced rapid community growth and momentum.

Since launching in April 2022 in partnership with Google Cloud, support has grown with 28 new organizations now part of the project (with over 50 contributing organizations), progress towards Technical Steering Committee (TSC) formation, and an upcoming Nephio Technical Summit, June 22-23, in Sunnyvale, Calif. New supporters include: A5G Networks, Alicon Sweden, Amdocs, ARGELA, CapGemini Technology, CIMI Corporation, Cohere Technologies, Coredge.io, CPQD, Deutsche Telekom, HPE, Keysight Technologies, KT, Kubermatic, Kydea, MantisNet, Matrixx, Minsait, Nabstract, Prodapt, Sandvine, SigScale, Spirent Communications, Telefónica, Tata Elxsi, Tech Mahindra, Verizon, Vodafone, Wind River, and Wipro.

Nephio’s goal is to deliver carrier-grade, simple, open, Kubernetes-based cloud-native intent automation and common automation templates that materially simplify the deployment and management of multi-vendor cloud infrastructure and network functions across large scale edge deployments. Nephio enables faster onboarding of network functions to production including provisioning of underlying cloud infrastructure with a true cloud native approach, and reduces costs of adoption of cloud and network infrastructure.

“We are pleased to see Nephio experience such rapid growth in such a short time,” said Arpit Joshipura, general manager, Networking, Edge, and IoT, the Linux Foundation. “This is a testament to the market need for open, collaborative initiatives that simplify network functions and cloud infrastructure across edge deployments.”

“We are heartened by the robust engagement from our growing Nephio community, and look forward to continuing to work together to set a new open standard for cloud-native networks to advance automation, network function deployment, and the management of user journeys,” said Gabriele Di Piazza, Senior Director, Telecom Product Management, Google Cloud.

Developer collaboration is underway with the Technical Steering Committee formation in progress. And the Nephio technical community will gather in-person and virtually for the first Nephio Technical Summit, June 22-23 in Sunnyvale, Calif. The goal is to discuss strategy, technology enhancements, roadmap, and operational aspects of cloud native automation in the Telecommunication world. More details, including how to register, are available here: https://nephio.org/events/

More information about Nephio is available at www.nephio.org

Support from contributing organizations

A5G Networks

“A5G Networks is a leader and innovator in autonomous and distributed mobile core network software over hybrid and multi-cloud. Our unique IP helps realize significant savings in capital and operating expenditures, reduces energy requirements, improves quality of user experience and catalyzes adoption of new business models. A5G Networks is excited to join the Nephio initiative for intent based automation and unlock the true potential of 5G networks,” said Kaitki Agarwal, founder, president and CTO of A5G Networks, Inc.

Amdocs

“Amdocs is excited to join the Nephio community and accelerate the Telecom industry’s journey towards cloud-native, Kubernetes-based automation and orchestration solutions. As a leader in telco automation and a founding member of the Linux Foundation’s ONAP and EMCO projects, Amdocs is thrilled to join this new community that will address the challenges coming with the era of 5G, edge and ORAN,” said Eyal Shaked, General Manager, Open Network PBU, Amdocs.

Capgemini

“Capgemini is excited to join the Nephio community and its working groups to facilitate telecom operators’ deployments by moving the Telecom industry towards a cloud-native platform and providing automation and orchestration solutions with the help of Nephio. Capgemini is an expert in O-RAN standards and has FAPI compliant O-CU and O-DU implementations. Capgemini is thrilled to join this new community that will address the challenges coming with the era of 5G, edge and ORAN,” said Sandip Sarkar, senior director, CTO Organization, Capgemini.

CIMI Corporation

“The Nephio project promises to provide an open-source implementation of network operator service lifecycle automation based on the cloud-standard Kubernetes orchestration platform. That’s absolutely critical for the convergence of network and cloud software,” said Tom Nolle, president, CIMI Corporation.

Coredge.io

Arif Khan, CEO, Coredge.io, said, “Bringing agility in delivering services and centrally managing the geographically distributed cloud while keeping cost in control is the key focus right now for operators. The Nephio project is meant to achieve this with Kubernetes-based cloud-native intent automation and automation templates. We are glad to contribute to Nephio with our learnings in the management of multi-cloud and distributed edge using intent-driven automation inside Coredge.”

Deutsche Telekom

“Large-scale automation is pivotal on our Software Telco journey. It is important that we work together as an industry on standards that will enable and simplify the cloud native automation of network functions. And we believe the Nephio project can play a fundamental role to speed up this process,” said Jochen Appel, VP Network Automation, Deutsche Telekom.

KT

“Cloud native is the next step on telcos’ path to successful digital transformation. Automated management that enables multi-vendor support and reduces cost through efficiency and agility is also a key factor in operating cloud-based network systems. The Nephio project will help open, wide, and easy adoption of such infrastructure. By working with partners in the project, we look forward to solving interworking issues among multiple vendors and easily building an efficient and agile orchestrated management system,” said Jongsik Lee, senior vice president, head of Infrastructure DX R&D Center, KT.

MantisNet

“MantisNet supports the Nephio initiative, specifically realizing the vision of autonomous networks. The Nephio project is complementary to the kinds of full-stack, end-to-end, programmable visibility, powered by an open, standards-based, event-driven, composable architecture that we are developing for a broad range of new and emerging use-cases to help ensure the secure and reliable operation of cloud-native 5G applications,” said Peter Dougherty, CEO, MantisNet.

Matrixx Software

“Continued advancements in the automation of distributed Cloud Native Network Functions will be critical to delivering on the promises of new differentiated 5G services, and key to new industry revenue models,” said Marc Price, CTO, Matrixx Software. 

Minsait

“As a company helping Telcos to onboard their 5G network functions, we are aware of the current challenges they are facing. Nephio is a key initiative to fulfill the promises of truly cloud native deployment and operation that specifically addresses the unique pain points of the Telco industry,” said Francisco Rodríguez, head of network virtualization at Minsait.

Nabstract.io

“Harmonization and availability of common practices that facilitate intent driven automation for deployment and management of infrastructure and cloud native Network Functions will boost the consumption of 5G connectivity capabilities across market verticals through abstracted open APIs,” said Vaibhav Mehta, Founder, Nabstract.io.

Prodapt

“Prodapt is the leading SI for the connectedness industry with a laser focus on software-intensive networks. As a key contributor to Project Nephio, we will jointly accelerate telcos’ journey towards becoming TechCos by co-innovating, co-building, co-deploying, and co-operating distributed multi-cloud network functions. We believe our collaboration will set the foundation for fully automated, intent-driven cloud-native networks supporting differentiated 5G and distributed edge experiences,” said Rajiv Papneja, SVP & general head, Cloud & Network Services, Prodapt.

Sandvine

“Sandvine Application and Network Intelligence solutions provide machine learning-based 5G analytics over hybrid cloud, multicloud, and edge deployments, empowering service-providers and enterprise customers to analyze, optimize, and monetize application experiences. Sandvine is proud to be a part of the Nephio initiative for intent-based automation, a prelude to Network-as-a-Service offerings that will scale autonomously, even when comprised of different vendors’ Infrastructure/Platform/Software-aaS components,” said Samir Marwaha, Chief Strategy Officer, Sandvine.

SigScale

“SigScale believes Nephio could be instrumental in achieving a management continuum across multi-cloud, multi-vendor networks,” said Vance Shipley, CEO, SigScale.

Vodafone

“Building, deploying, and operating Telco workloads across distributed cloud environments is complex, so it is important to adopt cloud native best practices as we evolve, to enable us to achieve our goals for agility, automation, and optimisation,” said Tom Kivlin, principal Cloud Architect, Vodafone. “Project Nephio presents a great opportunity to drive the cloud native orchestration of our networks.  We look forward to working with our partners and the Nephio community to further develop and accelerate the simplification of network function orchestration.” 

Wind River

“As active supporters and contributors of key telco cloud-native open source projects such as StarlingX and the O-RAN Alliance, Wind River is excited to join Nephio. Nephio’s mission of simplifying the deployment and management of multi-vendor cloud infrastructure across large scale deployments is directly aligned with our strategy,” said Gil Hellmann, vice president, Telecom Solutions Engineering, Wind River. 

About Nephio

More information can be found at www.nephio.org.

About the Linux Foundation

The Linux Foundation is the organization of choice for the world’s top developers and companies to build ecosystems that accelerate open technology development and commercial adoption. Together with the worldwide open source community, it is solving the hardest technology problems by creating the largest shared technology investment in history. Founded in 2000, The Linux Foundation today provides tools, training and events to scale any open source project, which together deliver an economic impact not achievable by any one company. More information can be found at www.linuxfoundation.org.

#####





The TODO Group, together with Linux Foundation Research, LF Training & Certification, api7.ai, Futurewei, Ovio, Salesforce, VMware, and X-Labs, is conducting a survey as part of a research project on the prevalence and outcomes of open source programs among different organizations across the globe. 

Open source program offices (OSPOs) help set open source strategies and improve an organization’s software development practices. Since 2018, the TODO Group has conducted surveys to assess the state of open source programs across the industry. Today, we are pleased to announce the launch of the 2022 edition featuring additional questions to add value to the community.

“The TODO Group was created to foster vendor-neutral best practices in open source usage and OSPO cultivation. Our annual OSPO survey is one of the best tools we have to understand how open source programs and initiatives are run at organizations worldwide, and to gain insight to inform existing and potential OSPO leaders of the nuances of fostering professional open source programs.”

Chris Aniszczyk, co-founder TODO Group and CTO, CNCF

“Thanks in part to the great community contributions received this year from open source folks engaged in OSPO-related topics, the OSPO 2022 Survey goes a step further to get insights and inform based on the most current OSPO needs across regions.”

Ana Jimenez Santamaria, OSPO Program Manager, TODO Group

The survey will generate insights into the following areas:

  • The extent of adoption of open source programs and initiatives 
  • Concerns around the hiring of open source developers 
  • Perceived benefits and challenges of open source programs
  • The impact of open source on organizational strategy

The survey will be available in English, Chinese, and Japanese. Please participate now; we intend to close the survey in mid-July. Privacy and confidentiality are important to us. Neither participant names, nor their company names, will be published in the final results.






Data Processing and Infrastructure Processing Units – DPU and IPU – are changing the way enterprises deploy and manage compute resources across their networks; OPI will nurture an ecosystem to enable easy adoption of these innovative technologies 

SAN FRANCISCO, Calif. – June 21, 2022 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the new Open Programmable Infrastructure (OPI) Project. OPI will foster a community-driven, standards-based open ecosystem for next-generation architectures and frameworks based on DPU and IPU technologies. OPI is designed to facilitate the simplification of network, storage and security APIs within applications to enable more portable and performant applications in the cloud and datacenter across DevOps, SecOps and NetOps.

Founding members of OPI include Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA and Red Hat, with a growing number of contributors representing a broad range of leading companies in their fields, from silicon and device manufacturers to ISVs, test and measurement partners, OEMs and end users.

“When new technologies emerge, there is so much opportunity for both technical and business innovation but barriers often include a lack of open standards and a thriving community to support them,” said Mike Dolan, senior vice president of Projects at the Linux Foundation. “DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and datacenter, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies.”

DPUs and IPUs are increasingly being used to support high-speed network capabilities and packet processing for applications like 5G, AI/ML, Web3, crypto and more because of their flexibility in managing resources across networking, compute, security and storage domains. Instead of the servers being the infrastructure unit for cloud, edge or the data center, operators can now create pools of disaggregated networking, compute and storage resources supported by DPUs, IPUs, GPUs, and CPUs to meet their customers’ application workloads and scaling requirements.

OPI will help establish and nurture an open and creative software ecosystem for DPU and IPU-based infrastructures. As more DPUs and IPUs are offered by various vendors, the OPI Project seeks to help define the architecture and frameworks for the DPU and IPU software stacks that can be applied to any vendor’s hardware offerings. The OPI Project also aims to foster a rich open source application ecosystem, leveraging existing open source projects, such as DPDK, SPDK, OvS, P4, etc., as appropriate. The project intends to:

  • Define DPU and IPU, 
  • Delineate vendor-agnostic frameworks and architectures for DPU- and IPU-based software stacks applicable to any hardware solutions, 
  • Enable the creation of a rich open source application ecosystem,
  • Integrate with existing open source projects aligned to the same vision such as the Linux kernel, and, 
  • Create new APIs for interaction with, and between, the elements of the DPU and IPU ecosystem, including hardware, hosted applications, host node, and the remote provisioning and orchestration of software

With several working groups already active, the initial technology contributions will come in the form of the Infrastructure Programmer Development Kit (IPDK) that is now an official sub-project of OPI governed by the Linux Foundation. IPDK is an open source framework of drivers and APIs for infrastructure offload and management that runs on a CPU, IPU, DPU or switch. 

In addition, NVIDIA DOCA, an open source software development framework for NVIDIA’s BlueField DPU, will be contributed to OPI to help developers create applications that can be offloaded, accelerated, and isolated across DPUs, IPUs, and other hardware platforms.

For more information visit: https://opiproject.org; start contributing here: https://github.com/opiproject/opi.

Founding Member Comments

Geng Lin, EVP and Chief Technology Officer, F5

“The emerging DPU market is a golden opportunity to reimagine how infrastructure services can be deployed and managed. With collective collaboration across many vendors representing both the silicon devices and the entire DPU software stack, an ecosystem is emerging that will provide a low friction customer experience and achieve portability of services across a DPU enabled infrastructure layer of next generation data centers, private clouds, and edge deployments.”

Patricia Kummrow, CVP and GM, Ethernet Products Group, Intel

“Intel is committed to open software to advance collaborative and competitive ecosystems and is pleased to be a founding member of the Open Programmable Infrastructure project, as well as fully supportive of the Infrastructure Programmer Development Kit (IPDK) as part of OPI. We look forward to advancing these tools, with the Linux Foundation, fulfilling the need for a programmable infrastructure across cloud, data center, communication and enterprise industries, making it easier for developers to accelerate innovation and advance technological developments.”

Ram Periakaruppan, VP and General Manager, Network Test and Security Solutions Group, Keysight Technologies

“Programmable infrastructure built with DPUs/IPUs enables significant innovation for networking, security, storage and other areas in disaggregated cloud environments. As a founding member of the Open Programmable Infrastructure Project, we are committed to providing our test and validation expertise as we collaboratively develop and foster a standards-based open ecosystem that furthers infrastructure development, enabling cloud providers to maximize their investment.”

Cary Ussery, Vice President, Software and Support, Processors, Marvell

“Data center operators across multiple industry segments are increasingly incorporating DPUs as an integral part of their infrastructure processing to offload complex workloads from general-purpose to more robust compute platforms. Marvell strongly believes that software standardization in the ecosystem will significantly contribute to the success of workload acceleration solutions. As a founding member of the OPI Project, Marvell aims to address the need for standardization of software frameworks used in provisioning, lifecycle management, orchestration, virtualization and deployment of workloads.”

Kevin Deierling, vice president of Networking at NVIDIA 

“The fundamental architecture of data centers is evolving to meet the demands of private and hyperscale clouds and AI, which require extreme performance enabled by DPUs such as the NVIDIA BlueField and open frameworks such as NVIDIA DOCA. These will support OPI to provide BlueField users with extreme acceleration, enabled by common, multi-vendor management and applications. NVIDIA is a founding member of the Linux Foundation’s Open Programmable Infrastructure Project to continue pushing the boundaries of networking performance and accelerated data center infrastructure while championing open standards and ecosystems.”

Erin Boyd, director of emerging technologies, Red Hat

“As a founding member of the Open Programmable Infrastructure project, Red Hat is committed to helping promote, grow and collaborate on the emergent advantage that new hardware stacks can bring to the cloud-native community, and we believe that the formalization of OPI into the Linux Foundation is an important step toward achieving this in an open and transparent fashion. Establishing an open standards-based ecosystem will enable us to create fully programmable infrastructure, opening up new possibilities for better performance, consumption, and the ability to more easily manage unique hardware at scale.”

About the Linux Foundation

Founded in 2000, the Linux Foundation and its projects are supported by more than 1,800 members. It is the world’s leading home for collaboration on open source software, open standards, open data, and open hardware. The Linux Foundation’s projects are critical to the world’s infrastructure, including Linux, Kubernetes, Node.js, Hyperledger, RISC-V, and more. The Linux Foundation’s methodology focuses on leveraging best practices and addressing the needs of contributors, users and solution providers to create sustainable models for open collaboration. For more information, please visit us at linuxfoundation.org.

###

 

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see its trademark usage page: www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds. Red Hat is a registered trademark of Red Hat, Inc. or its subsidiaries in the U.S. and other countries.

Marvell Disclaimer: This press release contains forward-looking statements within the meaning of the federal securities laws that involve risks and uncertainties. Forward-looking statements include, without limitation, any statement that may predict, forecast, indicate or imply future events or achievements. Actual events or results may differ materially from those contemplated in this press release. Forward-looking statements speak only as of the date they are made. Readers are cautioned not to put undue reliance on forward-looking statements, and no person assumes any obligation to update or revise any such forward-looking statements, whether as a result of new information, future events or otherwise.

Media Contact
Carolyn Lehman
The Linux Foundation
clehman@linuxfoundation.org





PostgreSQL is an open-source object-relational database system. It is a powerful database system that supports both relational and non-relational data types. The Boolean data type is a commonly used data type that can accept three values: True, False, and NULL. The short form of this data type is bool, and one byte is used to store Boolean data. The True value of the Boolean data can also be denoted by ‘yes’, ‘y’, ‘true’, and 1. The False value can also be denoted by ‘no’, ‘n’, ‘false’, and 0.
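For example, any of these accepted literals can be cast to BOOLEAN in psql (the column aliases here are arbitrary); psql prints t for true and f for false:

# SELECT 'yes'::BOOLEAN AS is_true, 'no'::BOOLEAN AS is_false;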

Pre-requisites:

You have to install the latest version of PostgreSQL packages on the Linux operating system before executing the SQL statements shown in this tutorial. Run the following commands to install and start the PostgreSQL:

$ sudo apt-get -y install postgresql postgresql-contrib

$ sudo systemctl start postgresql.service

Run the following command to login to PostgreSQL with root permission:
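On Ubuntu, the standard way to do this is to switch to the postgres system user and open the psql shell (assuming the default postgres superuser account created by the installation):

$ sudo -u postgres psql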

Use of Boolean data type in PostgreSQL tables:

Before creating any table with the Boolean data type, you have to create a PostgreSQL database. So, run the following command to create a database named ‘testdb’:

# CREATE DATABASE testdb;

The following output will appear after creating the database:

Example-1: Create a table using the Boolean data type

Create a table named ‘technicians’ in the current database with three fields. The first field name is tech_id, the data type is an integer and it is the primary key of the table. The second field name is name and the data type is character. The third field name is available and the data type is Boolean.

# CREATE TABLE technicians (

    tech_id INT NOT NULL PRIMARY KEY,
    name CHARACTER(10) NOT NULL,
    available BOOLEAN NOT NULL

);

The following output will appear if the table is created successfully:

Run the following command to INSERT a record into the technicians table where ‘true’ is used for the Boolean value:

# INSERT INTO technicians VALUES (1, 'Esquivar Ali', 'true');

The following output will appear after executing the above insert query:

Run the following INSERT command to insert a record into the technicians table where 0 is used for the Boolean value. The 0 is not acceptable for the Boolean value in PostgreSQL. So, an error message will appear.

# INSERT INTO technicians VALUES (2, 'Kabir Hossain', 0);

The following output will appear after executing the above insert query. The output shows an error message indicating that the type of 0 is integer, not boolean.
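The error message looks like the following (wording may vary slightly between PostgreSQL versions):

ERROR:  column "available" is of type boolean but expression is of type integer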

Run the following INSERT command to insert a record into the technicians table where ‘false’ is used for the Boolean value.

# INSERT INTO technicians VALUES (3, 'Abir Hasan', 'false');

The following output will appear after executing the above insert query:

Run the following INSERT command to insert a record into the technicians table where ‘t’ is used for the Boolean value:

# INSERT INTO technicians VALUES (5, 'Rebeka Ali', 't');

The following output will appear after executing the above insert query:

Example-2: Create a table using Bool data type

Create a table named ‘products’ in the current database with three fields. The first field name is id, the data type is an integer and it is the primary key of the table. The second field name is name and the data type is character. The third field name is physical_product, the data type is BOOL, and the default value of the field is ‘true’.

# CREATE TABLE products (

    id INT NOT NULL PRIMARY KEY,
    name CHARACTER(10) NOT NULL,
    physical_product BOOL NOT NULL DEFAULT 'true'

);

The following output will appear if the table is created successfully:

Run the following command to insert a record into the products table where ‘f’ is used for the BOOL value:

# INSERT INTO products VALUES (1, 'Antivirus', 'f');

Run the following INSERT command to insert a record into the products table where no value is provided for the BOOL data. Here, the default value of the field will be inserted.

# INSERT INTO products VALUES (2, 'Celador');

The following output will appear after executing the above two insert queries:

Check the content of the tables:

Run the following SELECT query to retrieve all records from the technicians table:

# SELECT * FROM technicians;

Run the following SELECT query to retrieve all records from the technicians table where the value of the available field is ‘false’:

# SELECT * FROM technicians WHERE available = 'false';

Run the following SELECT query to retrieve all records from the technicians table where the value of the available field is ‘t’ or ‘true’:

# SELECT * FROM technicians WHERE available = 't' OR available = 'true';

The following output will appear after executing the above three ‘select’ queries. The output of the first query shows all records of the table. The output of the second query shows those records of the table where the value of the available field is ‘f’. The output of the third query shows those records of the table where the value of the available field is ‘t’.

Run the following select query to retrieve all records from the products table:

# SELECT * FROM products;

Run the following select query to retrieve all records from the products table where the value of the physical_product field is ‘True’:

# SELECT * FROM products WHERE physical_product = 'True';

The following output will appear after executing the above two ‘select’ queries. The output of the first query shows all records of the table. The output of the second query shows those records of the table where the value of the physical_product field is ‘t’.

Conclusion:

This tutorial used multiple examples to show the different ways the Boolean (BOOL) data type can be declared, inserted, and queried in PostgreSQL tables.





The Enumerated or ENUM data type is used to select one value from a list of multiple values. The value stored in an ENUM field must come from that fixed list. The ENUM values are static, unique, and case-sensitive, and an input value that does not match any ENUM value can’t be inserted into the ENUM field. An ENUM value takes 4 bytes of storage in the table. The ENUM data type is useful for storing those types of data that are not required to change in the future, and it helps to insert valid data only. The uses of the ENUM data type in PostgreSQL have been shown in this tutorial.

Pre-requisites:

You have to install the latest version of PostgreSQL packages on the Linux operating system before executing the SQL statements shown in this tutorial. Run the following commands to install and start the PostgreSQL:

$ sudo apt-get -y install postgresql postgresql-contrib

$ sudo systemctl start postgresql.service

Run the following command to login to PostgreSQL with root permission:
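On Ubuntu, the standard way to do this is to switch to the postgres system user and open the psql shell (assuming the default postgres superuser account created by the installation):

$ sudo -u postgres psql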

Uses of ENUM data type:

Before creating any table with the ENUM data type, you have to create a PostgreSQL database. So, run the following command to create a database named ‘testdb’:

# CREATE DATABASE testdb;

The following output will appear after creating the database:

Create and read the ENUM type:

Run the following CREATE command to create an ENUM type named account_status with three values:

# CREATE TYPE account_status AS ENUM('Pending', 'Inactive', 'Active');

Run the following SELECT command to print the values of ENUM type that has been created before:

# SELECT UNNEST(enum_range(NULL::account_status)) AS account_status;

The following output will appear after executing the above commands:

Rename the ENUM Type:

Run the following command to change the name of the ENUM type from ‘account_status’ to ‘status’:

# ALTER TYPE account_status RENAME TO STATUS;

Create a table using the ENUM data type:

Create a table named ‘account’ in the current database with five fields. The first field name is username; it is the primary key of the table and its data type is VARCHAR (20). The second field name is name and the data type is VARCHAR (30). The third field name is address and the data type is TEXT. The fourth field name is email and the data type is VARCHAR (50). The fifth field name is a_status and its data type is the ENUM type (now named STATUS) that was created earlier.

# CREATE TABLE account (

   username VARCHAR (20) PRIMARY KEY,
   name   VARCHAR (30),
   address   TEXT,
   email   VARCHAR (50),
   a_status   STATUS );

The following output will appear after executing the above command:

Insert data into the table:

Run the following INSERT query to insert three records into the account table. All values of the ENUM field are valid here:

# INSERT INTO account (username, name, address, email, a_status)

   VALUES
   ('farhad1278', 'Farhad Hossain', '123/7, Dhanmondi Dhaka.', '[email protected]', 'Active'),
   ('nira8956', 'Nira Akter', '10/A, Jigatola Dhaka.', '[email protected]', 'Inactive'),
   ('jafar90', 'Jafar Iqbal', '564, Mirpur Dhaka.', '[email protected]', 'Pending');

The following output will appear after executing the above query:

Run the following INSERT query to insert a record into the account table but the value given for the ENUM field does not exist in the ENUM type:

# INSERT INTO account (username, name, address, email, a_status)

   VALUES
   ('rifad76', 'Rifad Hasan', '89, Gabtoli Dhaka.', '[email protected]', 'Blocked');

The following output will appear after executing the above query. The error occurred because the given ENUM value does not exist in the ENUM type.
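The error message looks like this (the type name reflects the rename done earlier):

ERROR:  invalid input value for enum status: "Blocked"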

Run the following SELECT command to read all records from the account table:
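# SELECT * FROM account;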

Run the following SELECT command to read those records of the account table that contain the ‘Active’ or ‘Pending’ value in the ENUM field:

# SELECT * FROM account WHERE a_status='Active' OR a_status='Pending';

The following output will appear after executing the above SELECT queries:

Change the ENUM value:

If any existing value of the ENUM type is changed, then the value stored in every table column that uses this ENUM type will be changed as well.

Run the following ALTER command to change ENUM value ‘Active’ to ‘Online’:

# ALTER TYPE STATUS RENAME VALUE 'Active' TO 'Online';

Run the following SELECT command to check the records of the account table after changing the ENUM value:
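# SELECT * FROM account;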

The following output will appear after executing the above commands. There was one record in the table that contains the ENUM value, ‘Active’. The output shows that the ‘Active’ value has been changed to ‘Online’ after changing the ENUM value.

Add new value to an existing ENUM data type:

Run the following ALTER command to add a new item into the ENUM type named status:

# ALTER TYPE STATUS ADD VALUE 'Blocked';

Run the following SELECT query that will print the list of ENUM types after adding the new value:

# SELECT UNNEST(enum_range(NULL::STATUS)) AS account_status;

The following output will appear after executing the above query:

A new value can also be inserted before or after a particular value of an existing ENUM type, instead of being appended at the end. The first ALTER command below would add a new value ‘Blocked’ before the value ‘Inactive’; the second would add it after ‘Inactive’.

# ALTER TYPE STATUS ADD VALUE 'Blocked' BEFORE 'Inactive';

# ALTER TYPE STATUS ADD VALUE 'Blocked' AFTER 'Inactive';

Delete ENUM data type:

You have to delete the table where the ENUM type is used before removing the ENUM type. Run the following command to remove the table:
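# DROP TABLE account;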

Run the following command to remove the ENUM type after removing the table:
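# DROP TYPE STATUS;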

Conclusion:

This tutorial has shown the ways to create, update, and delete ENUM data types in PostgreSQL and how to use them in a table, which will help new PostgreSQL users understand the purpose of the ENUM data type.





Constraints are limitations or restrictions that are applied to specific columns of a table. Sometimes, they refer to the extra privileges that are assigned to particular columns. One of these constraints is the NOT NULL constraint of the SQL database. A column specified with the NOT NULL constraint cannot be left without a value. Thus, we have decided to cover the use of the NOT NULL constraint within the SQLite database while implementing this article on Ubuntu 20.04. Before going to the illustration of using the NOT NULL constraint in the SQLite database, we have to open the Ubuntu terminal via the Ctrl+Alt+T shortcut and update and upgrade our system using the shown-below instruction.
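The usual pair of apt commands covers this:

$ sudo apt update && sudo apt upgrade -y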

Make sure to have the SQLite3 package already installed on your Linux system. After that, you need to launch it within the shell terminal with the use of the keyword “sqlite3”. The SQLite shell will be opened within the shell of Ubuntu 20.04.
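For example, the following opens (or creates) a database file; the file name test.db is arbitrary:

$ sqlite3 test.db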

Let’s list all the tables that are found in the SQLite database. Thus, we will be trying the “.tables” instruction to do so. The sqlite3 database doesn’t contain any tables yet (i.e. according to the “.tables” instruction.)

The constraints can only be applied to the columns of a table. If we don’t have any table, then we don’t have any columns, and thus no constraints. Therefore, we have to create a table in the database on which we can apply the NOT NULL constraint. So, the database allows us to use the CREATE TABLE instruction to create one table with the name “Test”. This table will contain a total of 2 columns, “ID” and “Name”. The column ID will be of integer type and will be used as the primary key for the table. The “Name” column will be of text type and must not be NULL as per the NOT NULL constraint specified at the time of creating the table, as shown below. Now, we have a new table “Test” in the database as per the “.tables” instruction.
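>> CREATE TABLE Test(ID INTEGER PRIMARY KEY, Name TEXT NOT NULL);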

The use of the SELECT instruction to fetch the records of the Test table shows that the table is empty right now. So, we need to add some records to it first.
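>> SELECT * FROM Test;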

We will be using SQL’s INSERT INTO instruction followed by the name of the table and its columns to insert the data records. Records are added after the “VALUES” keyword, within parentheses, holding a total of 10 records. No record has been specified NULL for the column “Name” so far, as presented below.

INSERT INTO Test(ID, Name) VALUES (1, 'George'), (2, 'Bella'), (3, 'Paul'), (4, 'Johny'),

(5, 'Ketty'), (6, 'Angelina'), (7, 'Nina'), (8, 'Dilraba'), (9, 'Tom'), (10, 'Tyalor');

After inserting the records into the Test table, we have tried the SELECT instruction to display all the data on our SQLite shell. It displayed 10 records for the ID and Name column.

Let’s see how the NOT NULL constraint reacts to empty strings and the NULL keyword while inserting data into the column “Name” of the Test table. First, we used the empty value ‘’ in place of the “Name” column value within the VALUES part of the INSERT INTO instruction:

INSERT INTO Test(ID, Name) VALUES (11, ''), (12, '');

The records have been successfully added to the table Test. After using the SELECT instruction on the shell, we found that it displays nothing for the column “Name” at records 11 and 12, i.e. SQLite takes the empty string as a non-NULL value.

If you try the INSERT INTO instruction below, which lists two column names but provides only one value per row, it will throw the error “1 values for 2 columns” as presented below. To remove this error, you need to put a value for the “Name” column and not leave it out.

INSERT INTO Test(ID, Name) VALUES (11), (12);

Let’s put the NULL keyword within the VALUES part of the INSERT INTO instruction to add null records for the column “Name” that carries the NOT NULL constraint:

INSERT INTO Test(ID, Name) VALUES (13, NULL), (14, NULL);

Execution of this instruction throws the error “NOT NULL constraint failed: Test.Name”. This means we cannot put NULL as a value for the column “Name” due to its NOT NULL constraint restriction.

Let’s take a look at another example. So, we have created a new table Actor with the three columns ID, Name, and Age via the CREATE TABLE instruction. None of the columns contains a NOT NULL constraint on it.

>> CREATE TABLE Actor(ID INT PRIMARY KEY, Name TEXT, Age INT);

Right now the table Actor is empty as per the SELECT “*” instruction below.
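>> SELECT * FROM Actor;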

We have to put some records in the table “Actor” first with the INSERT INTO instruction. So, we have added four records for all three columns: ID, Name, and Age.

>> INSERT INTO Actor(ID, Name, Age) VALUES (1, 'Julia', 49), (2, 'Angelina', 49),

(3, 'Leonardo', 50), (4, 'Tom', 55);

We have tried the SELECT instruction to fetch all the newly added records of the “Actor” table. A total of four records have been displayed on our screen with no NULL values.

Let’s add a new record within the Actor table that contains a NULL value using the INSERT INTO instruction. So, we have tried the instruction for all 3 columns and added a NULL value for the column “Age”. The record has been successfully inserted in the table and didn’t throw any error because we haven’t set any NOT NULL constraint for any of the columns of the table “Actor”. The use of the SELECT instruction for the table Actor displays the first four added records and the fifth record with a NULL value in its column “Age”.

>> INSERT INTO Actor(ID, Name, Age) VALUES (5, 'Ema Watson', NULL);

>> SELECT * FROM Actor;

So, this was about the use of the NOT NULL constraint for specific columns of SQLite tables. We have demonstrated how the database reacts when values are omitted from columns with NOT NULL constraints and how empty strings and the NULL keyword behave with such columns.





Google Chrome is one of the most widely used browsers right now. Google Chrome serves as the go-to browser for both desktop and smartphone users with its wide variety of features, privacy protection, and a huge selection of add-ons to choose from.

Updates in the security features of Google Chrome have allowed it to mark connections to different websites as “Secure” or “Not secure”. You might have come across these warnings when you visit certain websites.

This guide will help you understand the error and what steps you should take to get around or fix it. By the end of this guide, you should be able to navigate yourself through a website safely even if it has a “Not secure” prompt for it.

HTTP vs HTTPS

It is necessary to understand the difference between HTTP and HTTPS to understand why you’re getting the “Not secure” prompt when you browse certain websites.

HTTP stands for HyperText Transfer Protocol. It is a protocol that establishes effective communication between a web server and a browser. It allows you to share media-based documents such as HTML.

Despite being the go-to protocol when it came to online communication, HTTP does not possess encryption methods, nor does it provide authentication methods. You’ll normally see the “Not secure” warning when browsing a website using the HTTP protocol.

Most websites have switched to HTTPS, with the “S” in the name standing for secure. This version provides them with proper authentication methods along with encryption.

SSL Certificates

SSL certificates are another way your browser verifies the security of a website. These certificates serve as proof that the website you’re visiting is safe and uses HTTPS as its protocol.

SSL certificates can be obtained in different ways. Website owners can apply for SSL certificates online after verifying their site information and generating CSR (Certificate Signing Request) for their domain.

What Does It Mean If a Website Is “Not Secure”?

Browsing websites that are not secure can potentially be dangerous.

If a website does not have an SSL certificate or uses HTTP instead of HTTPS, it implies that the website doesn’t have any strong means of protecting your information. This means that any personal information that you give on these sites can be stolen pretty easily by hackers.

It should be noted, however, that “Not secure” doesn’t imply that the destination hosts malware. So, visiting the website won’t necessarily put malware or a virus on your computer.

Visiting these sites, however, means that you’re leaving your information prone to attacks, as any information you enter can be compromised easily.

How to Identify if a Website Is Secure on Chrome?

Thanks to Google Chrome, identifying these websites has never been easier. Chrome’s advanced security features allow it to automatically detect whether the websites or servers have a valid SSL certificate.

When you open a website in Chrome, it marks it as secure or not secure. This is represented by a “lock” icon in the search bar.

When a website is secure, you should see a closed lock icon as shown in the image below. Clicking on the lock will show you that the connection is secure.

When a website is not secure, you should see a warning icon with the text Not secure as shown in the image below. Clicking on the icon will present you with more details.

It is advised that you keep an eye out for these prompts as they’ll prevent you from providing personal information to potentially harmful websites.

What to Do if a Site Is Not Secure?

In case the website you’re visiting is not secure, here’s a list of things you should remember in case you have to use it.

  • Don’t conduct any personal transactions on these websites. Since these websites are not secure, providing your information to them will most likely result in your information being compromised.
  • Try to use these websites as little as you can. Remember that even if you’re just viewing site information, you’re still very prone to attacks since your activity can easily be monitored.
  • In case you have to use these websites regularly, try contacting the site owners and asking them to switch over to HTTPS rather than HTTP.

Conclusion

We hope this guide helped you understand what to do when you’re prompted with a not secure option on Google Chrome. We covered some basics of HTTP and HTTPS, along with how to identify your connection as “Secure” or “Not secure” on Chrome and what you can do when browsing insecure sites. With this, we hope you have a safe browsing experience.





The binary data type is another useful data type of PostgreSQL to store binary string data. The sequence of bytes or octets is stored in the binary string. The zero-value octet and the non-printable octets can be stored in the field of the binary data type. The raw bytes are stored by the binary strings. The input value of the binary string can be taken by the ‘hex’ or ‘escape’ format and the format of the output depends on the configuration parameter, bytea_output. The default output format is ‘hex’. The BLOB or BINARY LARGE OBJECT is defined by the SQL standard as the binary string type. Different formats and the uses of binary data types in PostgreSQL have been shown in this tutorial.

Pre-requisites:

You have to install the latest version of PostgreSQL packages on the Linux operating system before executing the SQL statements shown in this tutorial. Run the following commands to install and start the PostgreSQL:

$ sudo apt-get -y install postgresql postgresql-contrib

$ sudo systemctl start postgresql.service

Run the following command to login to PostgreSQL with root permission:
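On Ubuntu, the standard way to do this is to switch to the postgres system user and open the psql shell (assuming the default postgres superuser account created by the installation):

$ sudo -u postgres psql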

Bytea Hex Format:

The binary data is encoded as two hexadecimal digits per byte in hex format. The binary string is preceded by the sequence \x. The hexadecimal digits can be either uppercase or lowercase. This format is supported by a wide range of external applications.

Example:

# SELECT E'\\xABC0110' AS hex_format;

Bytea Escape Format:

The escape format is the traditional PostgreSQL format. A sequence of ASCII characters is used to represent the binary data in escape format. Each octet is written as a three-digit octal value preceded by a backslash (written as two backslashes in an escaped string literal).

Bytea Literal Escaped Octets:

Decimal octet value | Description | Escaped input representation | Example | Output
0 | Zero octet | E'\\000' | SELECT E'\\000'::bytea; | \x00
45 | Hyphen | '-' or E'\\055' | SELECT E'-'::bytea; | \x2d
110 | 'n' | 'n' or E'\\156' | SELECT E'n'::bytea; | \x6e
0 to 31 and 127 to 255 | Non-printable octets | E'\\xxx' (octal value) | SELECT E'\\001'::bytea; | \x01

Bytea Output Escaped Octets:

Decimal octet value | Description | Escaped output representation | Example | Output
45 | Hyphen | - | SELECT E'\\055'::bytea; | -
32 to 126 | Printable octets | Any printable character | SELECT E'\\156'::bytea; | n
0 to 31 and 127 to 255 | Non-printable octets | \xxx (octal value) | SELECT E'\\001'::bytea; | \001

Use of Binary data type in PostgreSQL:

Before creating any table with the binary data type, you have to create a PostgreSQL database. So, run the following command to create a database named ‘testdb’:

# CREATE DATABASE testdb;

The following output will appear after creating the database:

Example-1: Create a table with a binary data type to store octal value

Create a table named ‘tbl_binary_1’ in the current database with two fields. The first field name is id, which is the primary key of the table. The value of this field will be incremented automatically when a new record is inserted. The second field name is binary_data and the data type is BYTEA.

# CREATE TABLE tbl_binary_1 (

   Id SERIAL PRIMARY KEY,
   binary_data BYTEA);

The following output will appear after executing the above query:

Run the following INSERT query that will insert two octal values into the tbl_binary_1 table:

# INSERT INTO tbl_binary_1 (binary_data)

   VALUES
   (E'\\055'),
   (E'\\156');

The following output will appear after executing the above query:

Run the following SELECT query that will read all records from the tbl_binary_1 table:

# SELECT * FROM tbl_binary_1;

The following output will appear after executing the above query. The output shows the hexadecimal value of each inserted octal value: \x2d for octal 055 and \x6e for octal 156.

Example-2: Create a table with a binary data type to store image data

Create a table named ‘tbl_binary_2’ in the current database with three fields. The first field name is id, which is the primary key of the table; the value of this field will be incremented automatically when a new record is inserted. The second field name is image_name and the data type is VARCHAR (20). The image name will be stored in this field. The third field name is image_data and the data type of this field is BYTEA. The image data will be stored in this field.

# CREATE TABLE tbl_binary_2 (

  Id SERIAL PRIMARY KEY,
  image_name VARCHAR(20),
  image_data BYTEA);

The following output will appear after executing the above query.

Insert an image in the table using PHP:

Create a PHP file named insert_image.php with the following code that will read the content of an image file. Then, store the image in the PostgreSQL table after converting it into binary data.

<?php

//Display error setting
ini_set('display_errors', 1);
error_reporting(E_ALL);

$host = "localhost";
$user = "postgres";
$pass = "12345";
$db = "testdb";

//Create database connection object
$db_connection = pg_connect("host=$host dbname=$db user=$user password=$pass")
    or die("Could not connect to server\n");

$filename = "flower.png";
$image = fopen($filename, 'r') or die("Unable to open the file.");
$data = fread($image, filesize($filename));
$cdata = pg_escape_bytea($data);
fclose($image);

//Insert the image data
$query = "INSERT INTO tbl_binary_2(image_name, image_data) VALUES('$filename', '$cdata')";
$result = pg_query($db_connection, $query);
if($result) echo "Image data is inserted successfully.";
pg_close($db_connection);

?>

The following output will appear after executing the above script from the local server with the image file existing in the current location:

Read the image data from the table using PHP:

Create a PHP file named get_image.php with the following code that will read the binary data of an image file. Create the image from the binary data and display the image in the browser.

<?php

//Display error setting
ini_set('display_errors', 1);
error_reporting(E_ALL);

$host = "localhost";
$user = "postgres";
$pass = "12345";
$db = "testdb";

//Create database connection object
$db_connection = pg_connect("host=$host dbname=$db user=$user password=$pass")
    or die("Could not connect to server\n");

//Read the image data from the table
$query = "SELECT image_data FROM tbl_binary_2 WHERE id=1";
$result = pg_query($db_connection, $query) or die(pg_last_error($db_connection));
$data = pg_fetch_result($result, 'image_data');
$cimage = pg_unescape_bytea($data);

//Create an image file with the image data retrieved from the table
$filename = "myfile.jpg";
$image = fopen($filename, 'wb') or die("Unable to open image.");
fwrite($image, $cimage) or die("Unable to write data.");
fclose($image);
pg_close($db_connection);

//Display the image in the browser
echo "<img src='".$filename."' height=200 width=300 />";

?>

The generated image from the stored image data will appear after executing the above script from the local server.

Conclusion:

The purpose of the binary data type and different uses of binary data in PostgreSQL have been shown in this tutorial to help new PostgreSQL users work with the binary data type.





In today’s tutorial, we will discuss how to disable and enable automatic updates on CentOS 7 using PackageKit. The tutorial is divided into two parts. In the first part, we will demonstrate how to disable auto updates on CentOS 7. In the second part, we will show you how to enable auto updates. We will use the CentOS command line to perform the tasks. The commands are very easy to follow.

What is PackageKit?

PackageKit is a system developed to make the installation and updating of software on your computer easier. The primary design goal is to unify the graphical software tools used in different distributions and to use some of the latest technology like PolicyKit. It is the graphical software updater in RedHat-based Linux distributions.

To learn more about PackageKit, visit the following page:

https://www.freedesktop.org/software/PackageKit/

Let’s get started with the tutorial!

How to Disable PackageKit on CentOS 7?

Following are the steps involved in disabling PackageKit on CentOS 7:

Step 1: Check the PackageKit Status

Before you begin to disable the automatic updates on CentOS 7, check the status of the PackageKit. It will be active as displayed below. To check the status, execute the following command:

systemctl status packagekit

You will see the output like this on your terminal:

Step 2: Stop PackageKit

Before disabling the PackageKit, we first need to stop it as we saw in the previous step that the service is in an active state. This means that it is running. To stop it, run the following command:

systemctl stop packagekit

Step 3: Mask PackageKit

In this step, we will mask the Packagekit service. Masking a service prevents the service from being started manually or automatically. To mask the service, run the following command:

systemctl mask packagekit

This command will create a symlink from /etc/systemd/system/packagekit.service to /dev/null.

Step 4: Remove PackageKit Software Updater

Now that the PackageKit is completely stopped and disabled, we will now remove it from our system. To do that, issue the following command:
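The same packages that are reinstalled in the second part of this guide can be removed with yum:

yum remove gnome-packagekit PackageKit-yum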

PackageKit will be instantly removed from our system.

How to Enable PackageKit on CentOS 7

Let’s also have a look at how to enable the PackageKit back. The following are the steps involved in enabling the PackageKit on CentOS 7:

Step 1: Reinstall PackageKit

To disable automatic updates, we had to remove the PackageKit. To enable automatic updates, we need to have it in our system again. With the help of the following command, we will install PackageKit back in our system:

yum install gnome-packagekit PackageKit-yum

Step 2: Unmask PackageKit

In this step, we will unmask the service. In part 1, we masked it to disable automatic updates. To unmask PackageKit, issue the following command:

systemctl unmask packagekit

Step 3: Start PackageKit

Now that the service is unmasked, let’s start it. To start PackageKit, we will run the following command:

systemctl start packagekit

Step 4: Verify PackageKit Status

Merienda the service is started, it is in an active state. Let’s verify it. To do that, run the following command to check the status of PackageKit:

systemctl status packagekit

The output will tell you that the service is running (active).

Step 5: Enable PackageKit

Let’s now enable PackageKit. To do that, execute this command:

systemctl enable packagekit

Now, your system is back to the old settings. Automatic updates are now enabled on your CentOS 7 machine.

Conclusion

In this guide, we explored how to disable automatic updates on CentOS 7 with the help of PackageKit. We also explored how to enable automatic updates again. CentOS command line was used to disable and enable updates.


