CLOUD

Trumpeting to Grand Central: Monitor and Deploy Cloud-Based Accounts

Going to the cloud is quick and easy, or so they say. Have you ever tried to set up some very relevant services in Amazon Web Services, like AWS CloudTrail, AWS Config and VPC Flow Logs? Configuring each service in multiple regions across multiple accounts can take endless hours, and even with automation you can miss an account within an organization.

What if I told you there is a better way to manage, monitor and deploy hundreds of cloud-based accounts while still leveraging any automation or frameworks that you’ve already built? Welcome to Grand Central, a feature within the Splunk App for Infrastructure (SAI). This solution takes an infrastructure-as-code approach to setting up and configuring your cloud-based platforms. In a matter of minutes, you can use automation to deploy AWS CloudTrail across hundreds of accounts and multiple regions. This solution is also compatible with AWS Landing Zone and AWS Control Tower.

This project grew out of hearing from many of our customers and partners who wanted a simplified way to set up AWS and then configure it to send data from AWS services to Splunk. I wanted a process that was scalable and required only a few people. This would lead to faster customer adoption and understanding, and help new customers experience value from their data sooner.

How Does This App Work?

Grand Central, much like the name suggests, is a way to deploy from a central location that can collect data from various accounts and regions. The first phase of this solution is focused on collecting data from Amazon Web Services and sending it serverlessly into Splunk using Kinesis Data Firehose. It also leverages AWS Organizations to discover all sub-accounts and add them into management through the Splunk UI. Finally, we have developed a simple script that can take all those credentials.csv files and automatically use them to configure all the credentials necessary to collect your AWS data.

Let’s get started by setting up an account in the master organization that has the ability to list all the sub-accounts within the organization. Here is a copy of the IAM policy that account needs to view the data:

IAM Policy - Grand_Central_Lister_Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "organizations:ListAccounts",
            "Resource": "*"
        }
    ]
}

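If you want to sanity-check the lister account outside of Grand Central, the short boto3 sketch below makes the same organizations:ListAccounts call that this policy allows. It assumes the lister account’s access keys are already available to boto3 (for example through environment variables or a named profile); that setup is an assumption for illustration and not part of the app itself.

# Illustrative sketch: list every sub-account in the organization using the
# lister account's credentials (assumed to be available to boto3 via the
# environment or a named profile).
import boto3

org = boto3.client("organizations")

# organizations:ListAccounts is paginated, so walk every page
paginator = org.get_paginator("list_accounts")
for page in paginator.paginate():
    for account in page["Accounts"]:
        print(account["Id"], account["Name"], account["Status"])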

Next, you will need to give the following policy to your admins within the sub-accounts to grant Grand Central access to set up and deploy data collection:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:CreateFunction",
                "iam:GetAccountPasswordPolicy",
                "kinesis:Get*",
                "iam:CreateRole",
                "s3:CreateBucket",
                "iam:AttachRolePolicy",
                "lambda:GetFunctionConfiguration",
                "iam:PutRolePolicy",
                "kinesis:ListStreams",
                "s3:GetObjectAcl",
                "iam:DetachRolePolicy",
                "logs:GetLogEvents",
                "lambda:DeleteFunction",
                "events:RemoveTargets",
                "s3:GetBucketPolicyStatus",
                "events:PutEvents",
                "iam:GetRole",
                "events:DescribeRule",
                "firehose:CreateDeliveryStream",
                "iam:GetAccessKeyLastUsed",
                "iam:DeleteRole",
                "cloudformation:*",
                "firehose:DescribeDeliveryStream",
                "s3:GetObject",
                "sts:AssumeRole",
                "s3:GetLifecycleConfiguration",
                "s3:GetBucketTagging",
                "logs:DescribeLogStreams",
                "s3:GetBucketLogging",
                "events:PutRule",
                "s3:ListBucket",
                "s3:GetAccelerateConfiguration",
                "iam:CreateUser",
                "s3:GetBucketPolicy",
                "firehose:DeleteDeliveryStream",
                "iam:PassRole",
                "sns:Get*",
                "sns:Publish",
                "iam:DeleteRolePolicy",
                "s3:DeleteBucket",
                "s3:PutBucketVersioning",
                "iam:ListAccessKeys",
                "s3:GetBucketPublicAccessBlock",
                "logs:DescribeLogGroups",
                "kinesis:DescribeStream",
                "iam:DeleteUser",
                "sns:List*",
                "events:PutTargets",
                "events:DeleteRule",
                "s3:ListAllMyBuckets",
                "s3:GetBucketCORS",
                "iam:ListUsers",
                "iam:GetUser",
                "s3:GetBucketLocation"
             ],
            "Resource": "*"
        }
    ]
}

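If you manage many sub-accounts, handing this policy out by hand gets tedious. The sketch below is one way to script it with boto3: it creates the JSON above as a managed IAM policy and attaches it to an existing admin user. The file name, policy name and user name are placeholders for illustration, not anything the app requires.

# Illustrative sketch: create the deployment policy above as a managed IAM
# policy and attach it to an existing admin user in the sub-account.
# The file name, policy name and user name are placeholders.
import boto3

iam = boto3.client("iam")

with open("grand_central_deployer_policy.json") as f:
    policy_document = f.read()

resp = iam.create_policy(
    PolicyName="GrandCentralDeployerPolicy",
    PolicyDocument=policy_document,
)

iam.attach_user_policy(
    UserName="sub.account.admin",
    PolicyArn=resp["Policy"]["Arn"],
)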
Let’s start by adding your master account that you just created: 

Make sure to put your AWS Account ID (numbers only) in the first field. The second field can be a string that you will use to identify the account. Finally, select your Cloud Account Type and enter the Access Key and Secret Key.

With your Master listing account added, your console should look something like this: 

You can now list all your accounts and validate that your account has the necessary permissions to run the list command:


The list of accounts gives you the same information returned by the AWS REST API call. Go back to the Grand Central Accounts view, where we will add these accounts under management within Splunk, and click the Add button:

Through a small act of magic, your accounts will show up under management in Grand Central:

Now that you have all these accounts ready, let’s configure the credentials used to collect the data from each account. Most admins keep these files in a vault of some kind or, if you’re like me, they are littered all around your Downloads directory:

Run the handy ‘credential_smusher.py’ Python script and it will take all of these credential files and compact them into a single JSON file (all_account_credentials.json).
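If you’re curious what that script is doing, the rough sketch below captures the idea: gather every credentials.csv export in a directory and write one combined JSON file. The CSV column names match the standard AWS console export, but the output layout shown here is only an assumption for illustration; the real credential_smusher.py in the Grand Central repo defines the format the app actually expects.

# Rough sketch of the "credential smushing" idea. The output JSON structure is
# an assumption for illustration only -- refer to credential_smusher.py in the
# Grand Central repo for the format the app actually uses.
import csv
import glob
import json

combined = []
for path in glob.glob("credentials*.csv"):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            combined.append({
                "user_name": row.get("User name", ""),
                "access_key_id": row.get("Access key ID", ""),
                "secret_access_key": row.get("Secret access key", ""),
            })

with open("all_account_credentials.json", "w") as out:
    json.dump(combined, out, indent=2)

print("Wrote %d credential sets to all_account_credentials.json" % len(combined))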

Go back to your Grand Central dashboard and click on the Bulk Credential Upload. Now take that all_account_credentials.json file and upload it:

You should now have all the credentials associated with all of your accounts under management in Grand Central.

Before you can deploy, you’ll need a target to send all of your data to. In this step you will create an HTTP Event Collector (HEC) endpoint. This endpoint needs to be an Amazon Kinesis Data Firehose-enabled endpoint. If you’re using Splunk Cloud, you should have an endpoint that looks like this: http-inputs-firehose-<your_customer_name>.splunkcloud.com:443

If you are using a Splunk deployment that you set up yourself on AWS, then you will need to set up your Firehose-enabled endpoint following these steps: <link_to_kdf_docs>. Click on “New Splunk Account” and follow the prompts:

The Splunk Account name can be anything; I would recommend using it to call out what data is being sent to that HEC endpoint.
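Behind the scenes, the deployment wires a Kinesis Data Firehose delivery stream to the HEC endpoint and token you just configured. Grand Central does this for you through CloudFormation, but for a sense of what is involved, here is an illustrative boto3 sketch of creating a Splunk-destination delivery stream by hand; every name, ARN, token and endpoint below is a placeholder.

# Illustrative only: create a Firehose delivery stream with a Splunk
# destination. Grand Central's CloudFormation templates handle this for you;
# every name, ARN, token and endpoint here is a placeholder.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="splunk-cloudtrail-stream",
    DeliveryStreamType="DirectPut",
    SplunkDestinationConfiguration={
        "HECEndpoint": "https://http-inputs-firehose-example.splunkcloud.com:443",
        "HECEndpointType": "Raw",
        "HECToken": "00000000-0000-0000-0000-000000000000",
        "S3BackupMode": "FailedEventsOnly",
        # Failed events are backed up to S3; the role must allow writing there.
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-backup-role",
            "BucketARN": "arn:aws:s3:::example-firehose-backup-bucket",
        },
    },
)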

With the accounts set up, credentials configured, and HEC endpoint enabled, you are now ready to deploy the infrastructure code and start collecting data in Splunk.

Click on Bulk Data Deployment and follow the prompts:

Select all the AWS accounts you want to collect the data from and give this job a Deployment Name. This name cannot contain spaces and will be used as the name for the CloudFormation deployment.

Select your regions and the Splunk Account you want to send your data into:

Finally, click Deploy and start the process for data collection:

Now, in the Observation view, you will be able to see the successful deployments:

If you were to validate this in the Amazon Web Services console, it would look something like this:
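You can also confirm the deployment from a script. The small boto3 sketch below checks for the CloudFormation stack in each region you deployed to; the stack name and region list are placeholders, so substitute the Deployment Name you entered above and your own regions.

# Illustrative check: confirm the CloudFormation stack created by the
# deployment exists in each region. The stack name and regions are placeholders.
import boto3
from botocore.exceptions import ClientError

STACK_NAME = "my-grand-central-deployment"
REGIONS = ["us-east-1", "us-west-2"]

for region in REGIONS:
    cfn = boto3.client("cloudformation", region_name=region)
    try:
        for stack in cfn.describe_stacks(StackName=STACK_NAME)["Stacks"]:
            print(region, stack["StackName"], stack["StackStatus"])
    except ClientError as err:
        print(region, "stack not found:", err)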

I learned a great deal from working on this project these past few months. First, there is significant demand for this functionality beyond cloud data sources. I also learned that you should always test your projects on what might be considered ‘fringe use cases,’ because inevitably, that will become the biggest use case for your project. 

As this project goes from being a stand-alone effort to being part of the Splunk App for Infrastructure, I’m looking forward to working with our product team to make getting data in a frictionless process for our customers. This enhancement is long-awaited, especially for our customers who are “all-in” on cloud and only growing their presence there. The app is available today through GitHub: https://github.com/amiracle/grand_central.

----------------------------------------------------
Thanks!
Kam Amir
