

In AWS, an IAM role is an AWS identity, similar to an IAM user. The IAM service is intricate and, if not configured carefully, can lead to security issues. A role has policies attached that determine what the identity is and is not allowed to do. A role is not tied to a single person; it can be assumed by anyone who needs it. Unlike an IAM user, which has long-term credentials (a password or access keys), a role provides temporary security credentials. When a user, application, or service needs access to AWS resources for which it does not hold permissions, it assumes a role and uses the resulting temporary credentials for the task.

What Will We Cover?

In this guide, we will see how to use the IAM “PassRole” permission. As a specific example, we will see how to connect an EC2 instance to an S3 bucket using the PassRole permission.

Important Terms and Concepts

AWS service role: It is a role assumed by a service so that it can perform the tasks on behalf of the user or account holder.

AWS service role for an EC2 instance: It is a role assumed by an application running on an Amazon EC2 instance to perform the tasks in the user account that are allowed by this role.

AWS service-linked role: It is a role that is predefined and directly attached to an AWS service, like the RDS service-linked role for launching an RDS DB instance.

Using the Passrole Permission to Connect an EC2 Instance with S3

Many AWS services need a role for configuration, and this role is passed to them by the user. In this way, services assume the role and perform the tasks on behalf of the user. For most services, the role needs to be passed only once while configuring that service. A user requires permissions for passing a role to an AWS service. This is a good thing from a security point of view, since administrators can control which users can pass a role to a service. The “PassRole” permission is granted by an administrator to an IAM user, role, or group so that it can pass a role to an AWS service.

To elaborate on the previous concept, consider a case where an application running on an EC2 instance requires access to an S3 bucket. For this, we can attach an IAM role to the instance so that the application gets the S3 permissions defined in the role. The application needs temporary credentials for authentication and authorization; EC2 obtains temporary security credentials when a role is associated with the instance running our application. These credentials are then made available to our application to access S3.

To grant an IAM user the capability to pass a role to the EC2 service at the time of launching an instance, we need three things:

  1. An IAM permissions policy for the role that decides the scope of the role.
  2. A trust policy attached to the role which allows the EC2 to assume the role and use the permissions defined inside the role.
  3. An IAM permission policy for the IAM user that lists the roles which it can pass.

Let’s do it in a more pragmatic way. We have an IAM user with limited permissions. We attach an inline policy that allows launching EC2 instances and passing an IAM role to a service. Then, we create a role for S3 access, named “S3Access”, and attach an IAM policy to it. In this role, we only allow reading of S3 data using the AWS managed “AmazonS3ReadOnlyAccess” policy.

Steps to Create the Role

Step 1. From the IAM console of the administrator (root), click on “Roles” and then select “Create role”.

Step 2. On the “Select trusted entity” page, select “AWS service” under “Trusted entity type”.

Step 3. Under “Use case”, select the radio button corresponding to “EC2” under “Use cases for other AWS services”:

Step 4. On the next page, attach the “AmazonS3ReadOnlyAccess” policy:

Step 5. Give a name to your role (“S3Access” in our case) and add a description for it. The following trust policy is automatically created with this role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com"
                ]
            }
        }
    ]
}

Step 6. Click on “Create role” to create the role:

IAM Policy for User

This policy gives the IAM user full EC2 permissions and permission to associate the “S3Access” role with the instance.

Step 1. From the IAM console, click on “Policies” and then on “Create policy”.

Step 2. On the new page, select the “JSON” tab and paste the following code:

{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:*"],
        "Resource": "*"
    },
    {
        "Effect": "Allow",
        "Action": "iam:PassRole",
        "Resource": "arn:aws:iam::Account_ID:role/S3Access"
    }]
}

Replace “Account_ID” with your AWS account ID.

Step 3. (Optional) Add tags to your policy.

Step 4. Give the policy a suitable name (“IAM-User-Policy” in our case), click the “Create policy” button, and attach this policy to your IAM user.

Attaching the “S3Access” Role to the EC2 Instance

Now, we will attach this role to our instance. Select your instance from the EC2 console and go to “Actions > Security > Modify IAM role”. On the new page, select the “S3Access” role from the drop-down menu and save.

Verifying the Setup

Now, we will check if our EC2 instance is able to access the S3 bucket created by the administrator. Log in to the EC2 instance, install the AWS CLI application, and run the following command on the instance:
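The original screenshot of the command is missing here. A typical check, assuming the administrator’s bucket is named “my-example-bucket” (a placeholder — substitute your own bucket name), would be:

```shell
# List the objects in the bucket; this succeeds on the instance because
# the attached "S3Access" role grants read-only access to S3
aws s3 ls s3://my-example-bucket
```

The same listing can also be done with `aws s3 ls` alone to enumerate all buckets visible to the role.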

Then, run the same command from the IAM user account configured on your local machine. You will notice that the command executes successfully on the EC2 instance, but we get an “access denied” error on the local machine:

The error is expected because we granted the S3 access permission only to the EC2 instance, not to the IAM user or any other AWS service. Another important thing to note is that we did not make the bucket or its objects publicly accessible.

Conclusion

In this guide, we demonstrated how to use the PassRole permission in AWS and successfully connected EC2 to S3. This is an important concept if you care about granting least privilege to your IAM users.





With the rise of cloud computing, more industries are migrating their workloads to cloud-based infrastructure. As a result of this trend, technologists have felt the need for a mechanism to automate the deployment of instances and other cloud resources. Terraform is one such open-source tool that facilitates this process.

What We Will Cover

This article will show how we can create an EC2 instance on AWS using Terraform. We will see an example of installing a simple web server on this instance. Let us first talk a little about the installation of Terraform.

How You Can Install Terraform

Official Terraform packages are available for various operating systems like Windows, Mac, and Linux-based distros, such as Ubuntu/Debian, CentOS/RHEL, etc. In addition, Terraform maintains pre-compiled binaries and can also be compiled from source. You can check the various installation procedures on the Terraform website. To verify your Terraform installation, run the following command:
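The command itself is missing from the page; the standard version check looks like this:

```shell
# Print the installed Terraform version to confirm the installation
terraform -version
```

If the installation succeeded, this prints the Terraform version along with the platform it was built for.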

Creating AWS EC2 Instance Using Terraform

After installing Terraform on your system, proceed with creating an EC2 instance on AWS. A Terraform deployment is managed more effectively with several files. Although we could declare everything in a single file, that approach quickly becomes unwieldy. So, let us first create a working directory as seen in the following:

Step 1. Start with a folder that will hold all the configuration files. Create the folder, and move inside it as shown in the following:

$ mkdir linuxhint-terraform && cd linuxhint-terraform

Step 2. Let us create our first configuration file, “variables.tf”, that contains information about our AWS region and the type of instance we want to use, as shown in the following:
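The screenshot showing the file creation is missing; on Linux, one way to create the empty file first is:

```shell
# Create an empty variables.tf in the working directory
touch variables.tf
```

Any text editor (nano, vim, etc.) works equally well for creating and editing the file.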

Now, put the following text inside it and save the file:

variable "aws_region" {
  description = "The AWS region to deploy the EC2 instance in."
  default     = "us-east-1"
}

variable "instance_type" {
  description = "Instance type for the EC2 instance"
  default     = "t2.micro"
}

Step 3. By default, when Terraform creates a new instance, the default security group associated with the instance denies all traffic. We will therefore create a new file, “secgrp.tf”, defining a security group, “web-sg”, that allows inbound “SSH” and “HTTP” traffic as well as all outbound traffic.

Now, put the following code inside it:

resource "aws_security_group" "web-sg" {
  name = "new-secgrp"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Step 4. Create a “main.tf” file that will define the desired infrastructure.

Now, put the following configuration inside it:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  region                  = var.aws_region
  shared_credentials_file = "/home/User_Name/.aws/credentials"
  profile                 = "profile1"
}

resource "aws_instance" "webserver" {
  ami                         = "ami-09d56f8956ab235b3"
  instance_type               = var.instance_type
  key_name                    = "EC2-keyPair-Name"
  vpc_security_group_ids      = [aws_security_group.web-sg.id]
  associate_public_ip_address = true

  root_block_device {
    volume_type           = "gp2"
    volume_size           = "30"
    delete_on_termination = false
  }

  user_data = <<EOF
#!/bin/bash
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install apache2 -y
sudo systemctl restart apache2
sudo chmod 777 -R /var/www/html/
cd /var/www/html/
sudo echo "<h1>This is our test website deployed using Terraform.</h1>" > index.html
EOF

  tags = {
    Name = "ExampleEC2Instance"
  }
}

output "IPAddress" {
  value = "${aws_instance.webserver.public_ip}"
}

In the previous code, do not forget to change “User_Name” to your system user’s name and “EC2-keyPair-Name” to the name of your key pair. Let us look at the parameters used in the above file:

aws_instance: Creates an EC2 instance resource. Instances can be created, changed, and destroyed.

ami: Specifies the AMI ID to use for the EC2 instance.

instance_type: Declares the type of instance to launch.

key_name: Specifies the name of the key pair to use with the EC2 instance.

vpc_security_group_ids: A list of security group IDs to attach.

associate_public_ip_address: Specifies whether to attach a public IP to an instance inside a VPC.

user_data: Passes commands/data to an instance at launch time.

Now, initialize Terraform by running the following command:
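The command was lost with the original screenshot; initialization is the standard Terraform step:

```shell
# Download the AWS provider declared in main.tf and prepare the working directory
terraform init
```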

Now, apply the changes using the following command:
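The missing command here is the standard apply step:

```shell
# Show the planned changes and, after confirmation, create the resources
terraform apply
```

Terraform prints an execution plan and prompts for a “yes” before provisioning anything.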

Verifying the Procedure

Now, let us check whether the desired EC2 instance is created. Head to the EC2 console and check for the running instances as shown in the following image:

Since our instance was created successfully, we will now see if the website we deployed is working correctly or not. Copy the DNS name or public IP of the instance and enter it inside a web browser as shown in the following:

Well done! Our web server is working nicely.

Cleaning up the Resources

When you have tested your infrastructure or when you do not require it, clean up the resources by running the following command:
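The cleanup command, also lost with the original screenshot, is:

```shell
# Tear down every resource created by this configuration
terraform destroy
```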

Conclusion

This guide taught us about creating an EC2 instance on AWS using Terraform. We have also demonstrated how to provision a simple AWS web server using Terraform.




AWS provides a virtual private cloud (VPC) service for creating a logically isolated virtual network in the cloud. Here, we can launch EC2 and RDS instances and create security groups and other resources. Like many other tasks, we can also create a VPC using Terraform.

What We Will Cover

This guide will show how to create an AWS VPC (Virtual Private Cloud) using Terraform.

What You Will Need

  1. AWS account
  2. Access to the internet
  3. Basics of Terraform

Creating AWS VPC Using Terraform

Now that we have installed Terraform on our local machine, we can continue our task of working with VPC. Here, we have outlined the VPC setup for our case:

We have one private and one public subnet with their corresponding route table. The public subnet also has a NAT gateway attached to it. The Terraform configuration for different components is stored in different files as:

  1. variables.tf: Definition of variables used in the files
  2. vpc.tf: For VPC resource
  3. gateway.tf: For Gateway resources
  4. subnets.tf: For defining public and private subnets
  5. route-table.tf: For public and private route table resources
  6. main.tf: For the Terraform and provider configuration

As mentioned earlier, Terraform uses several configuration files for provisioning resources, and each of these files must reside in their respective working folder/directory. Let us create a directory for this purpose:

Step 1. Create a folder that will hold your configuration files, and then navigate to this folder:

$ mkdir linuxhint-terraform && cd linuxhint-terraform

Step 2. Let us create our first configuration file, “variables.tf”, that will contain information about our AWS region and the CIDR blocks for our VPC and subnets:

Now, put the following text inside it, and save the file:

variable "aws_region" {
  description = "The AWS region to create the VPC in."
  default     = "us-east-1"
}

variable "vpc-cidr" {
  default = "172.168.0.0/16"
}

variable "pubsubcidr" {
  default = "172.168.0.0/24"
}

variable "prisubcidr" {
  default = "172.168.1.0/24"
}

Step 3. Create vpc.tf:

Now, put the following text inside it, and save the file:

resource "aws_vpc" "my-vpc" {
  cidr_block = var.vpc-cidr
}

Step 4. Create the gateway.tf file and define the internet gateway and NAT gateway here:

Now, put the following text inside it, and save the file:

# Create Internet Gateway resource and attach it to the VPC
resource "aws_internet_gateway" "IGW" {
  vpc_id = aws_vpc.my-vpc.id
}

# Create EIP for the NAT gateway
resource "aws_eip" "myEIP" {
  vpc = true
}

# Create NAT Gateway resource and place it in the public subnet
resource "aws_nat_gateway" "NAT-GW" {
  allocation_id = aws_eip.myEIP.id
  subnet_id     = aws_subnet.mypublicsubnet.id
}

Step 5. Create subnets.tf for the private and public subnets inside the VPC:

Now, put the following text inside it, and save the file:

resource "aws_subnet" "myprivatesubnet" {
  vpc_id     = aws_vpc.my-vpc.id
  cidr_block = var.prisubcidr
}

resource "aws_subnet" "mypublicsubnet" {
  vpc_id     = aws_vpc.my-vpc.id
  cidr_block = var.pubsubcidr
}

Step 6. Create route-table.tf for private and public subnets:

Now, put the following text inside it, and save the file:

# Creating RT for Private Subnet
resource "aws_route_table" "privRT" {
  vpc_id = aws_vpc.my-vpc.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.NAT-GW.id
  }
}

# Creating RT for Public Subnet
resource "aws_route_table" "publRT" {
  vpc_id = aws_vpc.my-vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.IGW.id
  }
}

# Associating the Public RT with the Public Subnet
resource "aws_route_table_association" "PubRTAss" {
  subnet_id      = aws_subnet.mypublicsubnet.id
  route_table_id = aws_route_table.publRT.id
}

# Associating the Private RT with the Private Subnet
resource "aws_route_table_association" "PriRTAss" {
  subnet_id      = aws_subnet.myprivatesubnet.id
  route_table_id = aws_route_table.privRT.id
}

Step 7. Make a “main.tf” file that will contain the definition for our infrastructure:

Now, put the following configuration inside it:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  region                  = var.aws_region
  shared_credentials_file = "/home/User_Name/.aws/credentials"
  profile                 = "profile1"
}

Replace “User_Name” in the previous code with your system username. Let us look at the parameters used in the previous files:

  • shared_credentials_file: The path of the file containing the credentials of the AWS users.
  • profile: The user profile to use when working with AWS.
  • aws_vpc: Resource for building a VPC.
  • cidr_block: Provides an IPv4 CIDR block for the VPC.
  • aws_internet_gateway: Resource for creating an internet gateway for the VPC.
  • aws_eip: Resource for creating an Elastic IP (EIP).
  • aws_nat_gateway: Resource for creating a NAT gateway for the VPC.
  • allocation_id: Attribute for the allocation ID of the EIP created above.
  • subnet_id: Attribute for the ID of the subnet in which the NAT gateway is deployed.
  • aws_subnet: Resource for creating a VPC subnet.
  • aws_route_table: Resource for creating a VPC route table.
  • route: Argument that contains a list of route objects.
  • nat_gateway_id: Argument denoting the ID of the VPC NAT gateway.
  • gateway_id: Optional argument for the VPC internet gateway.
  • aws_route_table_association: Resource for associating a subnet with a route table (public or private).
  • route_table_id: The ID of the route table with which we are associating the subnet.

Initializing the Terraform Directory

To download and install the provider we defined in our configuration and other files, we need to initialize the directory containing this file:
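As in the previous article, the command itself did not survive the page extraction; it is the standard initialization:

```shell
# Download the AWS provider and prepare the working directory
terraform init
```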

Building the Infrastructure

To apply the changes we planned above, run the following command:
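The missing command is the standard apply step:

```shell
# Show the execution plan and, after confirmation, create the VPC resources
terraform apply
```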

Enter “yes” on the terminal when prompted.

Verifying the Procedure

Now, let us check if the desired VPC is created or not. Head to the VPC console, and check for the available VPCs:

We can see that our VPC is created successfully.

After you have finished this task, delete the resources to avoid unnecessary charges:
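The cleanup command, lost with the original screenshot, is:

```shell
# Tear down the VPC and every other resource created by this configuration
terraform destroy
```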

Enter “yes” to apply the action.

Conclusion

In this guide, we have learned about creating a VPC on AWS using Terraform. The next thing that you can do is try to provision an RDS or EC2 instance using Terraform.
