Chapter 1: Data formats and ingestion mechanisms (validated and non-validated formats, Apache Parquet, JSON, CSV, Apache ORC, Apache Avro, RecordIO)
Chapter 3: How to use AWS streaming data sources to ingest data (Kinesis, Apache Flink, Apache Kafka)
Chapter 4: AWS storage options, including use cases and tradeoffs
Chapter 5: Extracting data from storage (S3, Elastic Block Store [EBS], EFS, RDS, DynamoDB) by using relevant AWS service options (S3 Transfer Acceleration, EBS Provisioned IOPS)
Chapter 6: Choosing appropriate data formats (Parquet, JSON, CSV, ORC) based on data access patterns
Chapter 8: Merging data from multiple sources (by using programming techniques, AWS Glue, Apache Spark)
Chapter 9: Troubleshooting and debugging data ingestion and storage issues that involve capacity and scalability
Chapter 10: Making initial storage decisions based on cost, performance, and data structure
Chapter 11: Data Preparation for Machine Learning (ML): transforming data and performing feature engineering, including data cleaning and transformation techniques (detecting and treating outliers, imputing missing data, combining, deduplication)
Chapter 12: Feature engineering techniques (data scaling and standardization, feature splitting, binning, log transformation, normalization)
Chapter 13: Encoding techniques (one-hot encoding, binary encoding, label encoding, tokenization)
Chapter 14: Tools to explore, visualize, or transform data and features (SageMaker Data Wrangler, AWS Glue, AWS Glue DataBrew)
Chapter 16: Data annotation and labeling services that create high-quality labeled datasets
Chapter 17: Transforming data by using AWS tools (AWS Glue, AWS Glue DataBrew, Spark running on EMR, SageMaker Data Wrangler)
Chapter 18: Creating and managing features by using AWS tools (SageMaker Feature Store)
Chapter 20: Pre-training bias metrics for numeric, text, and image data (class imbalance [CI], difference in proportions of labels [DPL])
Chapter 21: Strategies to address CI in numeric, text, and image datasets (synthetic data generation, resampling)
Chapter 24: Implications of compliance requirements (personally identifiable information [PII], protected health information [PHI], data residency)
Chapter 25: Validating data quality (by using AWS Glue DataBrew and AWS Glue Data Quality)
Chapter 26: Identifying and mitigating sources of bias in data (selection bias, measurement bias) by using AWS tools (SageMaker Clarify)
Chapter 27: Preparing data to reduce prediction bias (by using dataset splitting, shuffling, and augmentation)
Chapter 28: Configuring data to load into the model training resource (EFS, FSx)
Chapter 29: Capabilities and appropriate uses of ML algorithms to solve business problems
Chapter 30: How to use AWS artificial intelligence (AI) services (for example, Amazon Translate, Amazon Transcribe, Amazon Rekognition, Amazon Bedrock) to solve specific business problems
Chapter 33: Assessing available data and problem complexity to determine the feasibility of an ML solution
Chapter 34: Comparing and selecting appropriate ML models or algorithms to solve specific problems
Chapter 35: Choosing built-in algorithms, foundation models, and solution templates (for example, in SageMaker JumpStart and Amazon Bedrock)
Chapter 39: Methods to reduce model training time (for example, early stopping, distributed training)
Chapter 44: Model hyperparameters and their effects on model performance (for example, number of trees in a tree-based model, number of layers in a neural network)
Chapter 46: Using SageMaker built-in algorithms and common ML libraries to develop ML models
Chapter 48: Using custom datasets to fine-tune pre-trained models (for example, Amazon Bedrock, SageMaker JumpStart)
Chapter 49: Performing hyperparameter tuning (for example, by using SageMaker automatic model tuning [AMT])
Chapter 51: Preventing model overfitting, underfitting, and catastrophic forgetting (for example, by using regularization techniques, feature selection)
Chapter 52: Combining multiple training models to improve performance (for example, ensembling, stacking, boosting)
Chapter 54: Managing model versions for repeatability and audits (for example, by using the SageMaker Model Registry)
Chapter 55: Model evaluation techniques and metrics (for example, confusion matrix, heat maps, F1 score, accuracy, precision, recall, Root Mean Square Error [RMSE], receiver operating characteristic [ROC], Area Under the ROC Curve [AUC])
Chapter 58: Metrics available in SageMaker Clarify to gain insights into ML training data and models
Chapter 60: Selecting and interpreting evaluation metrics and detecting model bias
Chapter 63: Comparing the performance of a shadow variant to the performance of a production variant
Chapter 66: Deployment best practices (for example, versioning, rollback strategies)
Chapter 70: Model and endpoint requirements for deployment endpoints (for example, serverless endpoints, real-time endpoints, asynchronous endpoints, batch inference)
Chapter 72: Methods to optimize models on edge devices (for example, SageMaker Neo)
Chapter 74: Choosing the appropriate compute environment for training and inference based on requirements (for example, GPU or CPU specifications, processor family, networking bandwidth)
Chapter 77: Selecting the correct deployment target (for example, SageMaker endpoints, Kubernetes, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS], Lambda)
Chapter 78: Choosing model deployment strategies (for example, real time, batch)
Chapter 81: Tradeoffs and use cases of infrastructure as code (IaC) options (for example, AWS CloudFormation, AWS Cloud Development Kit [AWS CDK])
Chapter 83: How to use SageMaker endpoint auto scaling policies to meet scalability requirements (for example, based on demand, time)
Chapter 84: Applying best practices to enable maintainable, scalable, and cost-effective ML solutions (for example, automatic scaling on SageMaker endpoints, dynamically adding Spot Instances, using Amazon EC2 instances, placing Lambda behind the endpoints)
Chapter 85: Automating the provisioning of compute resources, including communication between stacks (for example, by using CloudFormation, AWS CDK)
Chapter 88: Deploying and hosting models by using the SageMaker SDK
Chapter 89: Choosing specific metrics for auto scaling (for example, model latency, CPU utilization, invocations per instance)
Chapter 94: Deployment strategies and rollback actions (for example, blue/green, canary, linear)
Chapter 97: Applying continuous deployment flow structures to invoke pipelines (for example, Gitflow, GitHub Flow)
Chapter 98: Using AWS services to automate orchestration (for example, to deploy ML models, automate model building)
Chapter 99: Configuring training and inference jobs (for example, by using Amazon EventBridge rules, SageMaker Pipelines, CodePipeline)
Chapter 100: Creating automated tests in CI/CD pipelines (for example, integration tests, unit tests, end-to-end tests)
Chapter 103: Techniques to monitor data quality and model performance
Chapter 105: Monitoring models in production (for example, by using SageMaker Model Monitor)
Chapter 106: Monitoring workflows to detect anomalies or errors in data processing or model inference
Chapter 107: Detecting changes in the distribution of data that can affect model performance (for example, by using SageMaker Clarify)
Chapter 109: Key performance metrics for ML infrastructure (for example, utilization, throughput, availability, scalability, fault tolerance)
Chapter 110: Monitoring and observability tools to troubleshoot latency and performance issues (for example, AWS X-Ray, Amazon CloudWatch Lambda Insights, Amazon CloudWatch Logs Insights)
Chapter 112: Differences between instance types and how they affect performance (for example, memory optimized, compute optimized, general purpose, inference optimized)
Chapter 113: Capabilities of cost analysis tools (for example, AWS Cost Explorer, AWS Billing and Cost Management, AWS Trusted Advisor)
Chapter 114: Cost tracking and allocation techniques (for example, resource tagging)
Chapter 117: Setting up dashboards to monitor performance metrics (for example, by using Amazon QuickSight, CloudWatch dashboards)
Chapter 118: Monitoring infrastructure (for example, by using EventBridge events)
Chapter 119: Rightsizing instance families and sizes (for example, by using SageMaker Inference Recommender and AWS Compute Optimizer)
Chapter 120: Monitoring and resolving latency and scaling issues
Chapter 121: Preparing infrastructure for cost monitoring (for example, by applying a tagging strategy)
Chapter 122: Troubleshooting capacity concerns that involve cost and performance (for example, provisioned concurrency, service quotas, auto scaling)
Chapter 123: Optimizing costs and setting cost quotas by using appropriate cost management tools (for example, AWS Cost Explorer, AWS Trusted Advisor, AWS Budgets)
Chapter 124: Optimizing infrastructure costs by selecting purchasing options (for example, Spot Instances, On-Demand Instances, Reserved Instances, SageMaker Savings Plans)
Chapter 128: Security best practices for CI/CD pipelines
Chapter 130: Configuring IAM policies and roles for users and applications that interact with ML systems
Chapter 132: Troubleshooting and debugging security issues
Chapter 133: Building VPCs, subnets, and security groups to securely isolate ML systems
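As a taste of the material in Chapter 13, the encoding techniques listed there can be illustrated with a minimal, framework-free sketch. This is an assumption-labeled toy example (the helper names `one_hot_encode` and `label_encode` are our own, not from any AWS library), showing the core idea behind one-hot and label encoding before tools such as SageMaker Data Wrangler automate it:

```python
def one_hot_encode(values):
    """One-hot encode a list of categorical values.

    Returns (categories, rows): the sorted category list and, for each
    input value, a 0/1 vector with a single 1 at that category's index.
    """
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    rows = []
    for v in values:
        row = [0] * len(categories)
        row[index[v]] = 1
        rows.append(row)
    return categories, rows


def label_encode(values):
    """Label encode: map each distinct category to a stable integer id."""
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    return [index[v] for v in values]


# Toy usage (hypothetical data):
cats, rows = one_hot_encode(["cat", "dog", "cat"])
# cats -> ["cat", "dog"]; rows -> [[1, 0], [0, 1], [1, 0]]
labels = label_encode(["cat", "dog", "cat"])
# labels -> [0, 1, 0]
```

In practice you would reach for library implementations (for example, pandas or scikit-learn) rather than hand-rolling these, but the sketch shows what the transforms actually do to the data.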
