
Unlimited Learning, One Price: $299 / INR 23,999
Limited-time offer: all content for $99 / INR 7,999. Offer valid for the next 3 days.

Subscribe

Chapter 1: Given a data set, load data into Snowflake - Outline considerations for data loading
Chapter 2: Given a data set, load data into Snowflake - Define data loading features and potential impact
Chapter 3: Ingest data of various formats through the mechanics of Snowflake - Required data formats
Chapter 4: Ingest data of various formats through the mechanics of Snowflake - Outline stages
Chapter 5: Troubleshoot data ingestion - Identify causes of ingestion errors
Chapter 6: Troubleshoot data ingestion - Determine resolutions for ingestion errors
Chapter 7: Design, build, and troubleshoot continuous data pipelines - Stages
Chapter 8: Design, build, and troubleshoot continuous data pipelines - Tasks
Chapter 9: Design, build, and troubleshoot continuous data pipelines - Streams
Chapter 10: Design, build, and troubleshoot continuous data pipelines - Snowpipe (for example, auto-ingest as compared to the REST API)
Chapter 11: Create User-Defined Functions (UDFs) and stored procedures, including Snowpark
Chapter 12: Design and use the Snowflake SQL API
Chapter 13: Install, configure, and use connectors to connect to Snowflake
Chapter 14: Design and build data sharing solutions - Implement a data share
Chapter 15: Design and build data sharing solutions - Create a secure view
Chapter 16: Design and build data sharing solutions - Implement row-level filtering
Chapter 17: Outline when to use external tables and define how they work - Partitioning external tables
Chapter 18: Outline when to use external tables and define how they work - Materialized views
Chapter 19: Outline when to use external tables and define how they work - Partitioned data unloading
Chapter 24: Scale out as compared to scale up
Chapter 25: Virtual warehouse properties (for example, size, multi-cluster)
Chapter 29: Search optimization service
Chapter 33: Monitor continuous data pipelines - Tasks
Chapter 35: Implement data recovery features in Snowflake - Time Travel
Chapter 36: Implement data recovery features in Snowflake - Fail-safe
Chapter 37: Outline the impact of streams on Time Travel
Chapter 39: Use system functions to analyze micro-partitions - Cluster keys
Chapter 40: Use Time Travel and cloning to create new development environments - Clone objects
Chapter 41: Use Time Travel and cloning to create new development environments - Validate changes before promoting
Chapter 42: Use Time Travel and cloning to create new development environments - Roll back changes
Chapter 43: Authentication methods (Single Sign-On (SSO), key pair authentication, username/password, Multi-Factor Authentication (MFA))
Chapter 44: Role-Based Access Control (RBAC)
Chapter 45: Column-level security and how data masking works with RBAC to secure sensitive data
Chapter 46: The purpose of each of the system-defined roles, including best-practice usage in each case
Chapter 47: The primary differences between the SECURITYADMIN and USERADMIN roles
Chapter 48: The difference between the purpose and usage of the USERADMIN/SECURITYADMIN roles and SYSADMIN
Chapter 49: Explain the options available to support column-level security, including Dynamic Data Masking and external tokenization
Chapter 50: Explain the options available to support row-level security using Snowflake row access policies
Chapter 51: Use DDL required to manage Dynamic Data Masking and row access policies
Chapter 52: Use methods and best practices for creating and applying masking policies on data
Chapter 54: Snowpark UDFs (for example, Java, Python, Scala)
Chapter 56: SQL UDFs
Chapter 58: User-Defined Table Functions (UDTFs)
Chapter 59: Secure external functions
Chapter 61: Snowpark stored procedures (for example, Java, Python, Scala)
Chapter 64: Transaction management
Chapter 65: Traverse and transform semi-structured data to structured data
Chapter 70: Perform data transformations using Snowpark (for example, aggregations)
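
A few of the chapter topics above lend themselves to short, concrete examples. The SQL sketches that follow are illustrative only; every object name (tables, stages, file formats, warehouses, roles) is a placeholder rather than something taken from the course material.

For Chapters 1 through 6 (data loading and ingestion troubleshooting), a minimal bulk-load sketch, assuming CSV files already uploaded under the orders/ prefix of a named internal stage and a hypothetical RAW_ORDERS table:

-- Reusable CSV file format and a named internal stage (illustrative names).
CREATE OR REPLACE FILE FORMAT csv_ff
  TYPE = CSV
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  SKIP_HEADER = 1;

CREATE OR REPLACE STAGE raw_stage
  FILE_FORMAT = (FORMAT_NAME = 'csv_ff');

-- Dry run first: VALIDATION_MODE reports load errors without loading any rows,
-- which is the usual starting point when troubleshooting ingestion failures.
COPY INTO raw_orders
  FROM @raw_stage/orders/
  VALIDATION_MODE = RETURN_ERRORS;

-- Actual load; ON_ERROR controls how bad records are handled.
COPY INTO raw_orders
  FROM @raw_stage/orders/
  ON_ERROR = CONTINUE;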
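
For Chapters 7 through 10 and 33 (continuous data pipelines), a minimal stream-plus-task sketch, assuming hypothetical RAW_ORDERS and ORDERS_CLEAN tables and a warehouse named TRANSFORM_WH:

-- Capture row-level changes on the landing table.
CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders;

-- A task that wakes up every five minutes but only runs when the stream has data.
CREATE OR REPLACE TASK merge_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
AS
  INSERT INTO orders_clean (order_id, customer_id, order_date)
  SELECT order_id, customer_id, TRY_TO_DATE(order_date)
  FROM raw_orders_stream
  WHERE METADATA$ACTION = 'INSERT';

-- Tasks are created suspended; resume the task to start the schedule.
ALTER TASK merge_orders_task RESUME;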
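
For Chapters 35 through 42 (data recovery, Time Travel, and cloning), a minimal sketch against a hypothetical ORDERS table, assuming the statements stay within the table's Time Travel retention period:

-- Query the table as it looked one hour ago.
SELECT COUNT(*) FROM orders AT (OFFSET => -3600);

-- Zero-copy clone as of that point in time: a cheap development environment
-- in which to validate changes before promoting them.
CREATE OR REPLACE TABLE orders_dev CLONE orders AT (OFFSET => -3600);

-- One way to roll back: swap the validated copy in place of the original.
ALTER TABLE orders SWAP WITH orders_dev;

-- Separately, a table dropped by mistake can be restored within the retention period:
-- UNDROP TABLE orders;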
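
For Chapters 45 and 49 through 52 (column-level and row-level security), a minimal Dynamic Data Masking and row access policy sketch, assuming a hypothetical CUSTOMERS table, a PII_ADMIN role, and a REGION_ENTITLEMENTS mapping table:

-- Dynamic Data Masking: unmask email only for an explicitly trusted role.
CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() = 'PII_ADMIN' THEN val
    ELSE '*** MASKED ***'
  END;

ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY email_mask;

-- Row access policy: non-admin roles only see rows for regions they are entitled to,
-- driven by the (hypothetical) REGION_ENTITLEMENTS mapping table.
CREATE OR REPLACE ROW ACCESS POLICY region_filter AS (cust_region STRING) RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'PII_ADMIN'
  OR EXISTS (
    SELECT 1
    FROM region_entitlements e
    WHERE e.role_name = CURRENT_ROLE()
      AND e.region = cust_region
  );

ALTER TABLE customers ADD ROW ACCESS POLICY region_filter ON (region);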
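
For Chapters 14 through 16 (secure data sharing), a minimal sketch, assuming a hypothetical SALES database and a placeholder consumer account identifier:

-- A secure view hides the underlying definition from consumers and applies
-- row-level filtering for the share; per-consumer filtering could instead
-- compare a mapping table against CURRENT_ACCOUNT().
CREATE OR REPLACE SECURE VIEW sales.public.orders_for_partner AS
  SELECT order_id, order_date, amount
  FROM sales.public.orders
  WHERE partner_id = 'PARTNER_A';

-- Create the share and grant only what the consumer needs.
CREATE OR REPLACE SHARE partner_share;
GRANT USAGE ON DATABASE sales TO SHARE partner_share;
GRANT USAGE ON SCHEMA sales.public TO SHARE partner_share;
GRANT SELECT ON VIEW sales.public.orders_for_partner TO SHARE partner_share;

-- Add the consumer account (placeholder identifier).
ALTER SHARE partner_share ADD ACCOUNTS = partner_org.partner_account;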
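
For Chapter 65 (traversing and transforming semi-structured data), a minimal sketch, assuming a hypothetical EVENTS table with a VARIANT column named PAYLOAD that holds JSON with a nested items array:

-- Colon/dot notation traverses the VARIANT; LATERAL FLATTEN explodes the nested
-- array into one row per element, with explicit casts back to structured types.
SELECT
  e.payload:user.id::NUMBER      AS user_id,
  e.payload:user.country::STRING AS country,
  item.value:sku::STRING         AS sku,
  item.value:qty::NUMBER         AS qty
FROM events e,
  LATERAL FLATTEN(INPUT => e.payload:items) item;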

Combo Packages at a Discount: Pick the one that best fits your learning needs.