Data ingestion using AWS Services, Part 2
Querying AWS S3 data from AWS Athena using SQL.
AWS Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. In this second part of the tutorial, we are going to crawl the migrated data in AWS S3, create table definitions in the Glue Data Catalog using AWS Glue, and query the data using AWS Athena. AWS Glue is a serverless data integration service that makes it easier to discover, prepare, move, and integrate data from multiple sources for analytics, machine learning (ML), and application development.
Before you proceed with this hands-on tutorial, make sure you have completed the first part of the tutorial, **Data Ingestion using AWS Services, Part 1**. Below is an architectural diagram of the full project.
- Search for and select **AWS Glue** in the top search bar of the AWS console.
- Click on Crawlers and then Create crawler. A crawler accesses your data store (e.g., AWS S3), extracts metadata, and creates table definitions in the AWS Glue Data Catalog.
- Enter a descriptive name for the crawler job and click Next.
- Click on Add a data source.
- Under Data source, select S3. Click on Browse S3 to choose the AWS S3 bucket containing the data we want to query. Leave all defaults and click on Add an S3 data source.
- Verify and click on Next.
- Create or select an IAM role under Existing IAM role and click Next.
- Click on Add database under Target database or select a database in the dropdown. Let’s create a database called testdb. Click Create database.
- For frequency, select On demand. The frequency defines a time-based schedule for crawlers and jobs in AWS Glue; On demand means the crawler runs only when you start it manually. Click Next.
- Check all settings and click Create crawler.
- After the successful creation of the crawler, click on Run crawler to start the crawler job.
- To check the status of a crawler, click on Crawlers, the name of the crawler, and then Crawler runs.
- Verify the table and the database by clicking on Tables.
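If you prefer to script the console steps above, the same database and crawler can be created and run with the AWS CLI. This is a minimal sketch: the crawler name, IAM role ARN, and S3 path below are placeholders and must be replaced with your own values.

```shell
# Create the Glue database that will hold the table definitions
aws glue create-database --database-input '{"Name": "testdb"}'

# Create an on-demand crawler pointing at the S3 bucket holding the migrated data
# (replace the role ARN and S3 path with your own)
aws glue create-crawler \
  --name mysql-data-crawler \
  --role arn:aws:iam::123456789012:role/GlueCrawlerRole \
  --database-name testdb \
  --targets '{"S3Targets": [{"Path": "s3://your-migrated-data-bucket/"}]}'

# Start the crawler, then poll its state (READY means the run has finished)
aws glue start-crawler --name mysql-data-crawler
aws glue get-crawler --name mysql-data-crawler --query Crawler.State
```

Because no `--schedule` is supplied, the crawler is created as On demand, matching the console setting chosen earlier.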
- Search for and select **AWS Athena** in the top search bar of the AWS console.
- Click Query editor. In the query editor, click Settings, then Manage. Under Manage settings, click Browse S3 to select an AWS S3 bucket that will serve as the location for query results. Click Save.
- In the query editor, enter the following SQL statement: `SELECT * FROM testbucketformysqldata123_raw LIMIT 10;`. The query returns the first 10 rows of the data migrated into the bucket. Note: substitute the table name with the name of your table.
- The Query results tab shows the results of the query.
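The same query can also be submitted without the console using the AWS CLI. This is a sketch assuming the database and table names from this tutorial; the results bucket is a placeholder and must be a bucket you own.

```shell
# Submit the query to Athena; the output location must point at your results bucket
QUERY_ID=$(aws athena start-query-execution \
  --query-string "SELECT * FROM testbucketformysqldata123_raw LIMIT 10;" \
  --query-execution-context Database=testdb \
  --result-configuration OutputLocation=s3://your-athena-results-bucket/ \
  --query QueryExecutionId --output text)

# Once the query has succeeded, fetch the result rows by execution ID
aws athena get-query-results --query-execution-id "$QUERY_ID"
```

Athena also writes the full result set as a CSV file to the output location configured above, which is useful for downstream processing.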
This ends the hands-on project on data ingestion using AWS DMS. Next in the series is SaaS data ingestion using Amazon AppFlow.