DronaBlog


Friday, June 9, 2023

How to Fix the HTTP Response Code 422 Error in Snowflake with a QueryFailureStatus Object

 Snowflake, the cloud data platform, offers powerful data warehousing and analytics capabilities. However, like any complex system, it may encounter errors from time to time. One such error is the HTTP response code 422, which is accompanied by a QueryFailureStatus object. In this article, we will explore the causes of this error and provide steps to resolve it effectively.



The HTTP response code 422 in Snowflake signifies an Unprocessable Entity error. It indicates that the server understands the request made by the client but cannot process it due to a semantic error. When this error occurs, Snowflake provides additional details in the form of a QueryFailureStatus object, which contains information about the error message, error code, and any associated stack traces.






To fix the HTTP response code 422 error in Snowflake, follow these steps:

  1. Analyze the Error Message: Begin by carefully examining the error message provided by the QueryFailureStatus object. It often contains valuable insights into the specific issue encountered during query execution; the query-history sketch after this list shows one way to retrieve these details.
  2. Review Query Syntax and Semantics: Verify the syntax and semantics of the SQL query that triggered the error. Check for any typos, missing or extra quotation marks, incorrect column or table names, or other similar issues that may cause the query to fail.
  3. Validate Data Types: Ensure that the data types of the columns being used in the query are appropriate for the intended operations. Mismatched data types can lead to semantic errors and trigger the HTTP response code 422.
  4. Check for Data Integrity Issues: Examine the data being queried for any anomalies or inconsistencies. Data integrity problems, such as missing or duplicate values, can interfere with query execution and result in the 422 error.
  5. Verify Access Privileges: Confirm that the user executing the query has the necessary privileges to perform the requested operations. Lack of appropriate permissions can lead to semantic errors and trigger the HTTP response code 422.
  6. Review Query Execution Settings: Snowflake provides various query execution settings that can affect the outcome of the query. Check if any specific settings, such as result size limits or time limits, are impacting the query execution. Adjusting these settings appropriately may help resolve the error.
  7. Utilize Snowflake Documentation and Community: Leverage the extensive documentation and community resources available for Snowflake. The official Snowflake documentation offers detailed information about error codes, troubleshooting steps, and best practices. Participating in the Snowflake community forums or reaching out to Snowflake support can also provide valuable insights and guidance.
  8. Test the Query in a Sandbox Environment: If possible, replicate the error in a non-production or sandbox environment. This allows you to experiment with different solutions without the risk of impacting live data. It can help isolate the cause of the error and verify the effectiveness of potential fixes.
  9. Collaborate with Colleagues: Engage with your colleagues, especially those experienced in Snowflake, to seek their expertise. They may have encountered similar issues in the past or possess insights that can assist in resolving the HTTP response code 422 error.




  10. Implement Fixes Incrementally: When attempting to fix the error, it is generally advisable to implement changes incrementally rather than making multiple modifications simultaneously. This approach allows you to identify the specific fix that resolves the issue and minimizes the chances of introducing new problems.
  11. Retest and Monitor: After applying a potential fix, retest the query to confirm that the HTTP response code 422 error no longer occurs. Monitor subsequent query executions to ensure the error does not resurface and that the fix does not have any adverse effects on other aspects of the system.


By following these steps, you can effectively troubleshoot and fix the HTTP response code 422 error in Snowflake. Remember to approach the error resolution process systematically, leverage available resources, and seek assistance when needed. With perseverance and careful analysis, you can overcome this error and continue to harness the power of Snowflake for your data analytics needs.


Learn more about Snowflake here



Wednesday, January 18, 2023

Snowflake Interview Questions and Answers - Part I

 If you are preparing for a Snowflake interview and looking for interview questions and answers, then you have reached the right place. In this article, we will discuss Snowflake interview questions and answers.






1. What is Snowflake and what are its key features?

Answer: Snowflake is a cloud-based data warehousing platform that has a number of key features, including a SQL-based query language, a multi-cluster, shared-data architecture, and support for both structured and semi-structured data.


2. How does Snowflake differ from other data warehousing solutions?

Answer: Snowflake is unique in its ability to scale computing and storage independently, its support for both structured and semi-structured data, and its built-in support for data sharing and time travel.


3. Can you explain the concept of a "virtual warehouse" in Snowflake?

Answer: A virtual warehouse in Snowflake is a named set of compute resources used to execute queries. It consists of one or more compute clusters of a specified size; storage is handled separately by the storage layer, so compute and storage scale independently.
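As an illustration, a virtual warehouse is created and sized with standard DDL. A minimal sketch; the warehouse name is hypothetical:

        CREATE WAREHOUSE analytics_wh            -- hypothetical name
          WITH WAREHOUSE_SIZE = 'XSMALL'         -- T-shirt sizing
               AUTO_SUSPEND   = 60               -- suspend after 60 seconds of inactivity
               AUTO_RESUME    = TRUE;            -- resume automatically when a query arrives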


4. How does Snowflake handle concurrency and query performance?

Answer: Snowflake uses a multi-cluster, shared-data architecture to handle concurrency and query performance. Queries are automatically routed to the appropriate compute cluster based on the data being accessed and the resources available.


5. How does Snowflake handle data security?

Answer: Snowflake provides a number of security features, including data encryption, secure data sharing, and row-level security. It also integrates with external security systems such as Azure AD, Okta, and more.


6. How does Snowflake handle data loading and ETL?

Answer: Snowflake supports a variety of data loading and ETL options, including bulk loading using the COPY command, staging files using the PUT command, and using Snowpipe for near real-time, continuous loading of data.
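A minimal sketch of a bulk load, with hypothetical stage, file, and table names (PUT is run from a client such as SnowSQL and compresses the file to .gz by default):

        PUT file:///tmp/customers.csv @my_stage;             -- upload to an internal stage

        COPY INTO customers                                  -- bulk load the staged file
          FROM @my_stage/customers.csv.gz
          FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);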


7. Can you explain the concept of "time travel" in Snowflake?

Answer: Time travel in Snowflake allows you to query historical versions of a table or view as it existed at a specific point in time in the past. This feature enables you to recover data that has been deleted or to compare data as it existed at different points in time.
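A minimal sketch of a Time Travel query; the table name is hypothetical and the requested offset must fall within the table's retention period:

        SELECT * FROM orders AT(OFFSET => -3600);   -- the table as of one hour ago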


8. How does Snowflake handle data unloading and backup?

Answer: Snowflake supports unloading data to external stages such as Amazon S3, Azure Blob Storage, and Google Cloud Storage using the COPY INTO <location> command. For backup and point-in-time recovery, Snowflake relies on its built-in Time Travel and Fail-safe features rather than a separate backup service.
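A minimal sketch of an unload, with hypothetical stage and table names:

        COPY INTO @my_stage/unload/orders_          -- write files to the stage
          FROM orders
          FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
          OVERWRITE = TRUE;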


9. How does Snowflake handle data archival and retention?

Answer: Snowflake supports data archival and retention through the use of "time travel" and "data retention" policies. The former allows you to easily access historical versions of data, while the latter allows you to automatically delete data that is no longer needed.


10. Can you explain how Snowflake handles data sharing?

Answer: Snowflake allows for secure data sharing through the use of "shares." A share is a specific set of data that can be shared with other Snowflake accounts or users. Shared data remains in the original account and is accessed through a secure, read-only connection.






11. What are the different types of Snowflake accounts and what are their use cases?

Answer: Snowflake accounts are offered under different editions: Standard, Enterprise, Business Critical, and Virtual Private Snowflake (VPS). Standard accounts are the most cost-effective option and suit small to medium-sized businesses; Enterprise accounts are designed for larger businesses with more demanding requirements; Business Critical adds higher levels of data protection and security; and VPS accounts are for organizations that require a fully isolated, private deployment of Snowflake.


12. How does Snowflake handle data replication and disaster recovery?

Answer: Snowflake's multi-cluster, shared-data architecture automatically replicates data across multiple availability zones, which provides built-in resilience and ensures high availability of data. Additionally, Snowflake's database replication feature allows data to be replicated between regions and cloud platforms for disaster recovery.


13. Can you explain how Snowflake handles data governance?


Answer: Snowflake provides a comprehensive set of data governance features, including data lineage, a data catalog, and data auditing. Data lineage shows the flow of data through various stages, the data catalog allows users to discover and understand the data, and data auditing provides insight into who accessed or modified data and when.


14. What are the different types of Snowflake storage options and their use cases?

Answer: Snowflake offers three table storage types: temporary, transient, and permanent. Temporary tables exist only for the duration of the session and are used for short-lived data; transient tables persist until dropped but carry no Fail-safe period, which makes them cheaper for data that does not need full protection; permanent tables are the default and include Time Travel and Fail-safe for data that must be retained and protected.






15. Can you explain the concept of "data cloning" in Snowflake?

Answer: Data cloning (zero-copy cloning) in Snowflake creates a copy of a database, schema, or table with minimal impact on the performance of the source, because no underlying storage is duplicated at clone time. The clone can be used for testing, reporting, or other purposes without affecting the original data. Cloning is done with the CREATE <object> ... CLONE command.
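A minimal sketch of a zero-copy clone; the table names are hypothetical:

        CREATE TABLE orders_dev CLONE orders;   -- shares storage until either side changes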


Learn more about Snowflake here




Tuesday, August 9, 2022

What are the top cloud data warehouses?

            Are you looking for the top cloud data warehouses in the current market? Are you interested in knowing which cloud data warehouse is the preferred one? If so, then you have reached the right place. Let's discuss the top cloud data warehouses.





A] What is a cloud data warehouse?

            A database delivered as a managed service in a public cloud, provided as software-as-a-service (i.e. SaaS) to perform analytics and business intelligence operations, is called a cloud data warehouse.

            In some cases, these can also be offered as private cloud provider services.


B] What are the top cloud data warehouses? 

             The following are the top cloud data warehouses currently available -

           1. Azure Synapse Analytics 

           2. Amazon Redshift 

           3. Google BigQuery

           4. Azure SQL Database 

           5. Snowflake

           6. Azure Cosmos DB






C] Which cloud data warehouse should we use?

    The choice of cloud data warehouse depends on the business use case. However, Snowflake is currently widely used because of the ease of use and performance it provides compared to other cloud data warehouses.


Learn more about the Snowflake cloud data warehouse here.



Sunday, March 27, 2022

How does secure data sharing work in Snowflake?

               Are you looking for the details about how data sharing works in Snowflake? Are you also interested in knowing what things we can share in Snowflake? If so, then you have reached the right place. In this article, we will explore data sharing in Snowflake. Let's start.


A] What are the database objects we can share?

              The following Snowflake database objects can be shared -

              a) External tables 

              b) Tables 

              c) Secure views 

              d) Secure UDFs

              e) Secure materialized views

             Shared objects are read-only, i.e. consumers cannot add, update, or delete data or objects.






B] How does secure data sharing work? 

             Secure data sharing is a Snowflake feature by which data is shared with consuming accounts.

            The following are important points about secure data sharing -

           1. No actual data is copied or transferred

           2. Sharing is achieved through Snowflake's unique services layer and metadata store 

           3. No storage is needed on the consumer side

           4. No monthly storage charges for the consumer 

           5. Consumers pay only for executing queries, i.e. for the compute resources they use 





       How does Data Sharing work in Snowflake? 

           a) Secure data sharing is built around provider and consumer concepts.

           b) The provider creates a share from a database and grants access to specific objects in that database 

           c) The provider can share objects from multiple databases, as long as those databases belong to the same account 

          d) On the consumer side, a read-only database is created from the share. Access to this database is configurable using standard role-based access control.
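A minimal sketch of both sides of a share; all object and account names are hypothetical:

        -- Provider side: create a share and grant access to specific objects
        CREATE SHARE sales_share;
        GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
        GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
        GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;
        ALTER SHARE sales_share ADD ACCOUNTS = partner_org.partner_account;

        -- Consumer side: create a read-only database from the share
        CREATE DATABASE sales_shared FROM SHARE provider_account.sales_share;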


Learn more about Snowflake here -



Sunday, January 23, 2022

What is an Account Identifier in Snowflake?

              Are you looking for an article about what an Account Identifier in Snowflake is? Are you also interested in knowing the format for an Account Identifier? If so, then you have reached the right place. In this article, we will also learn about organization names and account names.


A) What is an Account Identifier in Snowflake?

            The identifier that uniquely identifies a Snowflake account within a business entity and throughout Snowflake's global network is called the Account Identifier.

            Here, the global network of Snowflake comprises the supported cloud regions and cloud platforms.






B) What are the uses of Account Identifier?

              The following are important use cases where the Account Identifier plays a vital role -

           1) The account identifier is used in URLs for accessing the Snowflake web interface.

           2) The account identifier is also used for connecting to Snowflake using drivers, SnowSQL, connectors, and other clients. 

          3) It is also used in third-party applications that are part of the Snowflake ecosystem.

          4) The account identifier is required for Secure Data Sharing, database replication, and failover/failback features.

          5) It is also used in interactions with external systems and in securing Snowflake's internal operations.


C) How to identify a Snowflake account? 

             We can identify Snowflake accounts in two ways -

          1) Using the name given in the organization

          2) Using the Snowflake-assigned account locator 

        For identifying accounts using names in the organization, the ORGADMIN role must be created.


D) Identifying Account using Name in Organization

            An organization is a first-class Snowflake object, and it is linked to the accounts owned by the business entity. The organization gives administrators, i.e. users with the ORGADMIN role, capabilities to create, view, and manage all accounts across different regions and cloud platforms.

          Important points : 

             1) Account name must be unique within the organization.

            2) Account name is not unique across organizations

            3) Account names that contain underscores also have dashed versions (for use in URLs).

     Account name as account identifier formats -

            1) <organization_name>-<account_name>

            2) <organization_name>_<account_name>

            3) <organization_name>.<account_name>

    Let's understand more about the organization name and account name -

a) Organization Name: It is the name chosen by the customer 

        * The organization name must be unique across all Snowflake organizations.

        * It can contain letters and digits, including uppercase letters.

        * It must not contain underscores or other special characters.

       * The organization name can be changed, but doing so has additional complications.

b) Account Name: It is the name created by the customer.

       * The account name must be unique within an organization.

       * The account name is not unique across Snowflake organizations.






E) Identifying an Account using the Account Locator in a Region 

             The account locator is an identifier that Snowflake assigns at the time of account creation.

             Customers can request a specific value for the account locator if the account is created through a Snowflake service representative; otherwise it is generated as a random string.

            Since each Snowflake account is hosted on a cloud platform in a specific region, identifying an account by its account locator also requires the region id and, in some cases, the cloud platform.

           The formats using the account locator are -

        1) <account_locator>.<region_id>      or 

        2) <account_locator>.<region_id>.<cloud>
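As a quick check, Snowflake's context functions return the pieces of the locator-style identifier for the current session. A minimal sketch:

        SELECT CURRENT_ACCOUNT(),   -- the account locator
               CURRENT_REGION();    -- the region id, e.g. AWS_US_EAST_1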


       Learn more about Snowflake here 




Thursday, January 20, 2022

What is Virtual Private Snowflake?

                 Are you looking for details about Virtual Private Snowflake, or VPS? Are you also interested in knowing how to use an account locator for a VPS account? If so, then you have reached the right place. In this article, we will learn about Virtual Private Snowflake.


A) What is Virtual Private Snowflake?

                  Snowflake comes in various editions, such as Standard Edition, Enterprise Edition, Business Critical Edition, etc. Virtual Private Snowflake, or VPS, is another Snowflake edition.

                  Virtual Private Snowflake provides the highest level of security for organizations that have strict security requirements, e.g. financial institutions. It is a great help when analyzing and sharing sensitive data.

                  Virtual Private Snowflake (VPS) comes with all the services and features of Business Critical Edition. However, all this functionality runs in a completely separate environment, isolated from all other Snowflake accounts. In other words, VPS accounts do not share resources with accounts outside the VPS.






B) Account Identifier for VPS account 

                 The account identifier for a VPS account is different from the other Snowflake editions. The other Snowflake editions have specific formats for the account identifier; you can click here for more details. Because VPS uses a different structure for hostnames and URLs, the format of the VPS account identifier is also different. To get the specific format for a VPS account, we need to reach out to the Snowflake support team.

                 However, there is an alternative format that we can use for the VPS account identifier: 

                     <organization_name>_<account_name>


C) Does VPS support secure Data sharing?

                 The answer is no. VPS, or Virtual Private Snowflake, does not currently support secure data sharing. On the other hand, the Enterprise and Standard editions do support secure data sharing.


D) What are the important features of VPS?

                 Along with the features of Business Critical Edition, this edition provides the following -

                1) Use of Tri-Secret Secure for customer-managed encryption keys.

               2) Support for private connectivity to the Snowflake service using Azure Private Link, AWS PrivateLink, and Google Cloud Private Service Connect. 

               3) A dedicated metadata store and pool of compute resources 

               4) Support for FedRAMP in US government regions.


              Learn more about Snowflake here -




Sunday, January 16, 2022

Snowflake interview questions and answers - Part III

                       Are you preparing for your Snowflake interview? Are you looking for Snowflake interview questions and answers? If so, then you have reached the right place. In this article, we will focus on Snowflake caching interview questions and answers. You can visit the previous article on Snowflake interview questions - Part II here.





Q.1  Explain caching in Snowflake. How does caching work in Snowflake?

                        Snowflake provides caching at two levels: one at the cloud services layer (the result cache) and one at the compute layer (the local disk cache). When we execute a SQL query against Snowflake, the result is served from the cloud services layer if possible. If the cloud services layer cache is disabled or cannot be used, the compute layer cache is used.

                        The important thing to note is that the local disk cache helps only while the warehouse stays running, i.e. when queries arrive within the AUTO_SUSPEND window.






Q.2  How does cache work if the underlying table gets updated?

                         As we know, when we submit a SQL query to a virtual warehouse in Snowflake, it is executed against database storage and the results are returned to the cloud services layer. In addition, the data is cached at the compute layer and the results at the cloud services layer, so when we execute the same query again, the result is returned from cache.

                        Now let's assume we update the underlying table by deleting or updating a record. After updating the table, if we run the same query again, the cache will not be used. Instead, the virtual warehouse will go to database storage to fetch the data.


Q.3  Is it a good idea to run a separate select query after each update or delete on a table?

                        When we update the table by deleting or updating a record, the change is made in the storage layer. That means, if we then execute a query against Snowflake, it needs to reach the storage layer to fetch the latest data. Whenever we access database storage through the compute layer, it incurs cost. So it makes sense to perform all DML operations such as deletes and updates as a single unit of work, and then query the results, instead of making separate select query calls. This helps achieve cost-effectiveness.


Q. 4  Is one user's cache shared across multiple users?

                         Assume user 1 executes a SQL query against Snowflake using virtual warehouse VW1, and after successful execution the same SQL query is executed by user 2 with VW2. In such a case, will the cache be used for user 2's query execution? The answer is YES. As long as the users run the query (on the same or a different virtual warehouse) within the AUTO_SUSPEND timeframe, the cache, from either the result cache or the local disk cache, will be used.


Q. 5  How long are query results cached? 

                         Query results are retained for 24 hours from the last time the query was executed. The result cache layer holds query results for 24 hours. 


Q.6  How much does Snowflake charge for storing cache? 

                         Snowflake does not charge for storing cache.






Q. 7  Explain more about the remote disk.

                         The remote disk is nothing but database storage, which is provided by cloud object storage such as AWS S3. In other words, the remote disk is the blob storage area.


Q. 8  How do we disable the local disk cache? 

                     We cannot disable the local disk cache.


Q. 9  How do we disable the result cache? 

                      To disable the result cache, use the statement below -

                    ALTER SESSION SET USE_CACHED_RESULT = FALSE;


Q. 10 Can we use the Result cache even if we suspend the warehouse?

                        The answer is YES. The result cache can be used even when the virtual warehouse is suspended or inactive.


                     Learn more about Snowflake here...



Sunday, January 2, 2022

What is the difference between Star and Snowflake schema?

                    Are you looking for details about the star schema and snowflake schema? If so, then you have reached the right place. In this article, we will see the details along with the differences between the star and snowflake schemas.

A) What is star schema? 

                    The star schema is the simplest schema used to develop dimensional data marts and data warehouses. A star schema consists of one or more fact tables referencing multiple dimension tables.

                    The star schema separates business process data into facts and dimensions. Facts hold measurable, quantitative data; dimensions provide descriptive attributes related to the fact data.

                   1) Benefits:- The star schema is denormalized. Its benefits are simple queries, simplified business logic reporting, better query performance, and improved performance for aggregation operations.

                   2) Disadvantages:- The star schema is not flexible for complex analytical needs. It does not support many-to-many relationships, and data integrity is not well enforced due to the denormalized state.








B) Snowflake schema 

                   The Snowflake schema is a logical arrangement of tables in a multidimensional database with entity relationships that resemble a snowflake shape.

                  The snowflake schema has centralized fact tables that are connected to multiple dimensions.

                  The snowflake schema is similar to the star schema, but in the snowflake schema the dimensions are normalized into multiple related tables.

                 1) Benefits:- Better storage savings due to normalization, and optimization of some OLAP database models that work well with the snowflake schema.

                 2) Disadvantages:- More complex SQL queries due to normalization. Data loads into the snowflake schema must be highly controlled and managed.








C) Difference between snowflake and star schema 

                   The snowflake and star schemas are similar in nature. However, the snowflake schema normalizes some dimensions, whereas in the star schema the logical dimensions are denormalized.


                             Learn more about Snowflake here 



Monday, December 27, 2021

What are cold and hot virtual warehouses?

                            The virtual warehouse in Snowflake goes through various states: cold, warm, and hot. In this article, we will learn what each state of a virtual warehouse in Snowflake means. We will also see the impact of the virtual warehouse states on query performance.

A) What is a virtual warehouse in snowflake?

                            The virtual warehouse in Snowflake is a cluster of compute resources used to perform activities such as DML operations and SQL execution. The compute resources consist of temporary storage, memory, and CPU. The majority of Snowflake expenses come from the use of the virtual warehouse, i.e. the compute layer.





B) What are the states of the virtual warehouse in snowflake?

                             The virtual warehouse goes through various states, and these are -

                            a) Cold virtual warehouse 

                            b) Warm virtual warehouse 

                            c) Hot virtual warehouse 

                      In the next section, we will learn more about each of these states of the virtual warehouse.

C) Cold virtual warehouse :

                             Starting a new virtual warehouse, without local disk caching, and executing a query is called the cold virtual warehouse state. When our virtual warehouse is suspended or not active and we run a query, a new instance of the virtual warehouse is started.

D) Warm virtual warehouse : 

                             It is the state in which the virtual warehouse has been active and running for a while and has processed queries. Assume the virtual warehouse is in a warm state and we disable the result cache and execute a query; it will then use local disk caching. This caching is called warm caching.





E) Hot virtual warehouse :

                            The hot virtual warehouse is a state in which the virtual warehouse has been active and running for a while and has processed queries, and the result cache is on. If we execute a query in this state, the result is returned from the result cache. This is the most efficient operation.

F) Impact of virtual warehouse state on query performance 

                   1) Cold warehouse 

                                The query takes longer than on a warm or hot virtual warehouse. It reads from the remote disk; neither the local disk cache nor the result cache is used.

                   2) Warm warehouse 

                               The query takes less time than on a cold warehouse but more than on a hot virtual warehouse. It does not use the remote disk; it uses the local disk cache. It does not use the result cache.

                  3) Hot virtual warehouse 

                               The query takes less time to execute than on a cold or warm virtual warehouse. It uses neither the remote disk nor the local disk cache. The result is returned from the result cache in the cloud services layer. This is the most efficient way of getting the query result.
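A rough way to observe the warm and hot paths from a session; the table name is hypothetical:

        ALTER SESSION SET USE_CACHED_RESULT = FALSE;  -- bypass the result cache
        SELECT COUNT(*) FROM orders;                  -- served by the warehouse (warm path)

        ALTER SESSION SET USE_CACHED_RESULT = TRUE;   -- allow the result cache again
        SELECT COUNT(*) FROM orders;                  -- run twice; the repeat returns from the result cache (hot)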


                         Learn more about Snowflake here -



Thursday, December 23, 2021

Snowflake Interview Questions and Answers - Part II

                      This is the second part of the series of Snowflake interview questions and answers. For the first part, i.e. Snowflake interview questions and answers - Part I, click here.




Q1. What are the types of data warehouses? What type of data warehouse is Snowflake? 

                       Before understanding the types of data warehouses, we need to know what a data warehouse is. A data warehouse is a central data repository used for data analysis and reporting.

                       The following are the types of data warehouses -

                  a) Enterprise Data Warehouse (EDW): a centralized warehouse used for decision-making across the enterprise. An EDW is used for tactical and strategic decision purposes.

                 b) Operational Data Store (ODS): a centralized database that complements the EDW and often acts as a source for it. An ODS is refreshed in near real-time and is used for operational reporting and decision-making.

                 c) Data Mart: a subset of a data warehouse, normally used by a specific team or business line.

                    Snowflake is an analytic data warehouse, i.e. it can be used as an Enterprise Data Warehouse, and it is delivered as software-as-a-service (SaaS).






Q2. Is it possible to load data from the local system into Snowflake?

                   Yes. Local files can be staged to a Snowflake internal stage with the PUT command (for example, via SnowSQL) and then loaded with COPY INTO. Alternatively, files can be staged externally in an Amazon S3 bucket, Microsoft Azure Blob Storage, or Google Cloud Storage.

Q3. What are the important features of Snowflake? 

                  Listed below are the features of Snowflake -

                1. Database and object cloning

                2. External Table 

                3. Geospatial data support 

                4. XML support

               5. Caching

               6. Search optimization services

               7. Integration with Hive metastore 

               8. Data protection and security 

               9. Time Travel

             10. Data sharing

Q4. Can we use an external database such as Oracle or DB2 for the Snowflake storage layer?

                     No, we cannot use an external database for the Snowflake storage layer. Snowflake comes with a built-in database, which is a columnar-store relational SQL database. The Snowflake database works well with Tableau, Excel, and many other tools, and it provides all the services that come with a SQL database, such as role-based security, a query tool, multi-statement transactions, etc.





Q5. What are the caching areas in the Snowflake architecture? 

                     The data fetched from the storage layer is cached at two locations: 1) the compute layer and 2) the cloud services layer. 

                     If cloud services layer caching is disabled, then compute layer caching is used.



               Learn more about Snowflake here 




Friday, December 10, 2021

Snowflake Interview Questions and Answers - Part I

                 This is the first article in a series of interview questions and answers on Snowflake. Through these questions and answers, we will learn more about Snowflake, so let's start.




Q1. What is the architecture of Snowflake?

                  Snowflake architecture is a hybrid of shared-disk and shared-nothing database architectures. Like shared-nothing architecture, Snowflake processes queries using massively parallel processing (MPP) compute clusters where each node stores a portion of the data locally. On the other hand, like shared-disk architecture, Snowflake uses a central data repository that is accessible from all compute nodes.

                 Snowflake architecture has three layers: database storage to store data, query processing (the compute layer) for processing queries, and cloud services, which provides services such as security, metadata management, and the query optimizer.








Q2. Which cloud platforms are supported by Snowflake?

                  Snowflake supports the following cloud platforms -

                 1. Amazon Web Services (AWS)

                 2. Google Cloud Platform (GCP)

                 3. Microsoft Azure (Azure)


Q3. Do you consider Snowflake an ETL tool? 

                Yes, Snowflake can be considered an ETL tool because it performs extract, transform, and load operations like other ETL tools.

                a) Extract Process: With the help of this process, Snowflake extracts data from the source and creates data files. These data files support various formats such as CSV, XML, JSON, etc.

                b) Transform Process: We can write a custom transform process in Snowflake to pull data from the source, apply transformations for cleansing and standardization, and then load the data.

               c) Load Process: With the load process, we can load data to an internal or external stage. We can use Microsoft Azure Blob Storage, an Amazon S3 bucket, or a Snowflake-managed location for staging data. The data is then copied into Snowflake storage using the COPY INTO command.
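A minimal sketch of the load process using an external stage; all names and credentials are placeholders:

        CREATE STAGE raw_stage
          URL = 's3://my-bucket/raw/'
          CREDENTIALS = (AWS_KEY_ID = '<key>' AWS_SECRET_KEY = '<secret>');

        COPY INTO customers
          FROM @raw_stage
          FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1);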






Q4. What are the different Editions of Snowflake?

               Here is the list of Snowflake editions -

             a) Standard Edition 

             b) Enterprise Edition 

             c) Business Critical Edition

             d) Virtual Private Snowflake 


Q5. What kind of SQL does Snowflake use?  

             Snowflake uses ANSI SQL, which is the common standard version of SQL.




        Learn more about Snowflake here 


Tuesday, November 23, 2021

Why is Snowflake a leader?

                  Do you know why Snowflake is one of the leading cloud data warehouse platforms in the current market? Are you aware of how Snowflake has evolved over time, and how it is still evolving and replacing traditional data warehouses? We are going to cover all of this in this article.


Introduction:

                  In this article, we will first look at the golden age we are living in, then at the design of the traditional data warehouse. After that, we will see how Snowflake has evolved over the years, and finally what makes Snowflake a leader in the cloud data warehouse market.


Golden Age:

                 Let's start with the golden age. Currently, we live in the golden age of distributed computing. Public cloud platforms such as Amazon Web Services, Google Cloud Platform, and Microsoft Azure provide virtually unlimited storage and compute resources, available on demand. Because of this, end users can get an enterprise-class experience from systems and applications delivered through the software-as-a-service (SaaS) model, without spending a lot of money. These services are cost-efficient and perform well. Cloud data warehouses leverage these capabilities; traditional data warehouses do not.





                  Before looking at the drawbacks of a traditional data warehouse implementation, let's look at its design. A traditional data warehouse has multiple layers: the data source layer, staging layer, warehouse layer, data mart layer, and client layer.



                  The data source layer brings in data from various sources such as Salesforce, CRM, human resources systems, etc. Such data is stored in a traditional database or in flat-file format. ETL, i.e. extract, transform, load, is implemented to pull data from source systems and push it to the staging layer. After cleansing and standardization, data is loaded from the staging layer into the data warehouse. Along with raw data, we also store metadata and data summaries in the warehouse. Finally, this data is published to data marts. A data mart might be for sales, inventory, or purchasing. On top of the data mart layer sits the client layer, where business users and business analysts perform various operations to carry out in-depth data analysis, prepare reports, and perform data mining. These users connect to multiple data marts for their needs.

                   As we can see, this traditional data warehouse model is complex and resource-intensive. The traditional model was designed around the assumption of fixed resources. That was true earlier, but with evolving technologies, social media, and advances across the sector, a fixed-resource design is no longer relevant. We deal with a variety of data arriving at different speeds and in different formats, and traditional data warehouses struggle to manage all of these aspects.

                 Another aspect of a traditional data warehouse is the investment cost: a large chunk of money must be invested in the early stages of the implementation, which is not the case with a cloud data warehouse.

                Complex ETL pipelines are another drawback of the traditional data warehouse. Multiple pipelines must be built to push data from the sources to the staging layer, from staging to the warehouse, and from the warehouse to the data marts. Adding flexibility to this flow is very challenging for the traditional data warehouse, and this is where Snowflake comes into the picture. 

Snowflake Evolution:

We are going to cover various things about Snowflake, but before looking at its features and advancement, let's understand a few basics. Currently, Snowflake supports three cloud platforms: Amazon Web Services, Google Cloud Platform, and Microsoft Azure. Snowflake is available in various regions across the world, including North America, Europe, and Asia Pacific, and each of these regions is supported by the respective cloud platforms.




                Let's look at how Snowflake evolved over time. Snowflake was founded in 2012 and launched publicly in October 2014, initially on AWS with storage on Amazon S3. Once it became more stable, support for Microsoft Azure was introduced in 2018, and the Google Cloud Platform followed in 2019. Within a short period of time, the product has evolved a lot: it supports the three major cloud service providers, i.e. Amazon, Google, and Microsoft, and it may support more cloud providers in the future. Snowflake was ranked number one in the Cloud 100 in 2019 and is one of the leading tools in the cloud data warehouse market.





                So, what makes Snowflake a leading platform? The critical aspect of Snowflake is that it separates the storage and compute layers. Traditional data warehouses support either shared-nothing or shared-disk architecture; Snowflake brought a hybrid approach to the table, with benefits from both. Apart from that, Snowflake is a pure software-as-a-service product, i.e. users don't have to worry about software installation, administration, product upgrades, etc. It also supports ANSI SQL and ACID transactions. Semi-structured data, which is difficult to manage in a traditional data warehouse, is easily managed and maintained in Snowflake. Elastic storage and compute resources can be scaled independently and seamlessly, which is a critical capability Snowflake introduced. It is highly available and durable. And of course, it is cost-efficient, and Snowflake continues to work on improving cost efficiency further. Last but not least, Snowflake is secure and comes with end-to-end encryption.

              Because of all these features, Snowflake is a leader in the cloud data warehouse market.




Sunday, November 21, 2021

What is a virtual warehouse in Snowflake?

                  Are you looking for details about virtual warehouses? Are you also interested in knowing what role the virtual warehouse plays in Snowflake? If so, then you have reached the right place. In this article, we will learn about virtual warehouses in Snowflake in detail.


A) What is Virtual Warehouse? 

                  Before learning about the virtual warehouse, we need to know what EC2 is. EC2, or Elastic Compute Cloud, is a web service that provides secure, resizable compute capacity in the cloud. Now that we know what EC2 is, let's understand the virtual warehouse.

                  The virtual warehouse is an important layer in the Snowflake architecture, and on AWS it consists of clusters of EC2 instances. The virtual warehouse is an abstraction: each cluster is presented to a single user as one warehouse.






B) What is a worker node in the virtual warehouse? 

                   As we know, a virtual warehouse consists of a cluster of EC2 instances. Each individual EC2 instance is called a worker node, and it performs the tasks it is given. End users never interact with worker nodes directly. When a user performs an action that involves virtual warehouse processing, the user does not know how many worker nodes are working or how they perform tasks in the warehouse.


C) What are the virtual warehouse sizes? 

                   Virtual warehouses come in T-shirt sizes. The currently available sizes are -

                    i) X-Small

                   ii) Small

                  iii) Medium

                   iv) Large

                    v) X-Large

                   vi) 2X-Large

                  vii) 3X-Large

                 viii) 4X-Large

                   ix) 5X-Large

                    x) 6X-Large






D) Elasticity and Execution Engine 

                   A virtual warehouse comes with two important concepts: Elasticity and the Execution Engine. VWs are compute resources, and they can be created, resized, and destroyed at any point in time. This feature is elasticity, and it has no effect on the state of the persistent store, i.e. the database.
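A minimal sketch of elasticity in practice; the warehouse name is hypothetical:

        ALTER WAREHOUSE analytics_wh SET WAREHOUSE_SIZE = 'LARGE';  -- resize in place
        ALTER WAREHOUSE analytics_wh SUSPEND;                       -- release compute when idle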

                   The execution engine is implemented by Snowflake and is a SQL execution engine. It is built on the features below -

                        i) Columnar

                       ii) Vectorized 

                      iii) Push-based 


                     Learn more about Snowflake here  


Saturday, October 16, 2021

What is Time Travel in Snowflake?

                        Are you looking for details about Time Travel in Snowflake? Are you also interested in knowing what tasks we can perform using the Time Travel feature? If so, then you have reached the right place. In this article, we will learn about one of the most powerful features in Snowflake.


A) What is Time Travel in Snowflake?

                        Time Travel is the feature by which we can access historical data at any point within a specified period. With it, we can access not only data that has changed but also data that has been deleted.


B) What are the tasks that can be performed using Time travel in Snowflake?

                      The tasks below can be performed effectively using the Time Travel feature -

             1. Backing up data from key points in the past.

             2. Duplicating data from key points in the past.

             3. Restoring tables, schemas, and databases that were accidentally deleted.






C) What is Data Protection Lifecycle? 

                  In Snowflake, there are three phases of the data protection lifecycle. 

           1. Current Data Storage: on the current data set, we can perform standard operations such as DML, DDL, etc.

           2. Time Travel Retention: the retention period is normally 1 to 90 days. Here is the list of operations allowed with Time Travel -

          a) SELECT ... AT | BEFORE ...

          b) CLONE ... AT | BEFORE ...

          c) UNDROP ...

          3. Fail-safe: this is the last phase in the data protection lifecycle. Recovery in this phase can be performed only by Snowflake; no user operations are allowed.
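Minimal sketches of these operations; the table names and query id are hypothetical:

        -- Clone a table as it existed before a damaging statement ran
        CREATE TABLE orders_restored CLONE orders
          BEFORE(STATEMENT => '01a2b3c4-0000-1234-0000-abcd1234ef56');

        -- Recover an accidentally dropped table within the retention period
        UNDROP TABLE orders;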


D) Data Retention Period in Snowflake 

                In Snowflake, the data retention period is a key component of Time Travel. The data retention period specifies the number of days for which data is preserved. Snowflake preserves the state of data before each update, delete, or drop. 





               For Snowflake Standard Edition, the data retention period is one day.

              For Snowflake Enterprise Edition, the data retention period can be set between 0 and 90 days.


            Learn more about Snowflake here -




Sunday, October 3, 2021

How to handle NULL values in Snowflake

                   Are you facing challenges while handling NULL values in Snowflake? Are you also interested in knowing what we need to consider when columns in a query contain NULL values? In this article, we will learn the details of handling NULL values.


A) What are NULL values?

                   A NULL value is often referred to as "no value", and some systems do not have the NULL concept at all. In technical terms, a NULL value is a marker indicating that no value is stored. Systems handle NULL differently from one another, and Snowflake has its own way of handling NULL values.






B) What are the rules for handling NULL values in Snowflake?

                  Here is the list of rules  -

             1. Not-null to null value comparison :

                   If we compare a not-null value with a NULL value, the result is NULL.

                   e.g. 'ABC' = NULL returns NULL

             2. Equality or inequality comparison :

                   If we use inequality comparisons such as less than or greater than with NULL, the result is NULL.

                   e.g. 'ABC' > NULL returns NULL

             3. Comparison of NULL values :

                   If we compare one NULL value with another NULL value, the result is NULL.

                   e.g. NULL = NULL returns NULL

             4. The best way to test for NULL values is 

                      IS NULL or IS NOT NULL

             5. Function for null-safe equality :

                   We can use the EQUAL_NULL function to check null-safe equality.

                   e.g. EQUAL_NULL('ABC', NULL) returns FALSE.





             6. Aggregate functions ignore NULL values.

                  Assume that one of the fields in Snowflake contains the values below -

                  1, 2, NULL, 3, NULL, 6

                 If we use the aggregate function AVG on this field, it returns 3, i.e. (1+2+3+6)/4.

                 i.e. it ignores NULL values.

             7. Real average value 

                  If we need the real average in the example above, we can use the SUM function divided by the total row count, treating NULL as 0

                  i.e. SUM(<value>)/COUNT(*)

                 e.g. (1+2+0+3+0+6)/6

                  i.e. 12/6 = 2

             8. The empty string in Snowflake is NOT NULL

                 e.g. '' IS NULL returns FALSE.

             9. COUNT(*) returns the total count of rows in the table 

            10. COUNT(<column_name>) returns the count of rows with not-null values only
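The rules above can be verified directly; a minimal sketch:

        SELECT 'ABC' = NULL            AS not_null_vs_null,    -- NULL
               'ABC' > NULL            AS inequality,          -- NULL
               NULL  = NULL            AS null_vs_null,        -- NULL
               'ABC' IS NOT NULL       AS is_not_null,         -- TRUE
               EQUAL_NULL('ABC', NULL) AS null_safe_equality,  -- FALSE
               ''    IS NULL           AS empty_string;        -- FALSE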


                                  Learn more about Snowflake here -



Tuesday, September 28, 2021

What are the components of the Snowflake architecture?

                Are you looking for an article on the Snowflake architecture? Are you also looking for the components of the Snowflake architecture? If so, then you have reached the right place. In this article, we will explore database storage, query processing, and cloud services in detail.

A) What is the Architecture of Snowflake?

               Snowflake architecture is a hybrid of shared-nothing and shared-disk database architectures.

       1. Snowflake uses a central data repository, which is similar to shared-disk architecture.

       2. Snowflake processes queries using massively parallel processing compute clusters. In this kind of architecture, each node in the cluster stores a portion of the entire data set. This is similar to shared-nothing architecture.





B) What are the components of Snowflake Architecture?

              The components of the Snowflake architecture are as below 

        1. Database Storage

        2. Query Processing 

        3. Cloud Services 

                 Let's understand each of these components one by one

1. Cloud  Services 

               It is the topmost layer in the Snowflake architecture. It consists of a collection of services that coordinate activities across the Snowflake platform. The cloud services layer ties the various components of Snowflake together in order to fulfill requests, from login to returning query responses to the user.

               Here is the list of services that are handled in this layer.

         1. Authentication 

         2. Infrastructure Management 

         3. Metadata Management 

         4. Query Parsing 

         5. Query optimization 

         6. Access Control

2. Query Processing 

               In this layer, query execution is handled. It is the most common and widely used component of Snowflake. Queries are processed using virtual warehouses. Each virtual warehouse is a massively parallel processing compute cluster consisting of multiple compute nodes that Snowflake provisions from the cloud provider.





3. Database Storage 

              It is cloud storage where optimized data is stored. What is optimized data? Optimized data is simply data that Snowflake has reorganized into a compressed, columnar format.

             What aspects of the data does Snowflake handle? Here is the list taken care of by Snowflake -

         1. File Size 

         2. Structure of the data 

         3. Compression of the data 

         4. Metadata

         5. Statistics of the data

         6. Organization of the data


 The important thing here is that the data stored by Snowflake is not directly visible or accessible to customers. It can only be accessed through SQL query operations.


                 Learn more about Snowflake here 



Tuesday, July 27, 2021

Top 10 commonly used commands in Snowflake

        Are you looking for details about commonly used commands in Snowflake? Are you also interested in knowing what the DML and DDL Snowflake commands are? Then you have reached the right place. In this article, we will explore Snowflake commands in more detail.

A) Types of commands in Snowflake

          There are four types of commands in Snowflake, and those are 

          1. DDL - Data Definition Language

          2. DML - Data Manipulation Language

          3. DCL - Data Control Language

          4. TCL - Transaction Control Language

         In this article, we will focus on the DDL and DML commands used in Snowflake






B) What are the commonly used DDL commands in Snowflake?

            Here is the list of DDL commands used in Snowflake -

           1. ALTER

           2. CREATE

           3. DROP 

           4. USE

           5. SHOW


           Let's see each of these commands one by one -

1. ALTER - The ALTER command is used to modify metadata at the account level, parameters for a session, or the metadata of a database object.

Syntax : 

              ALTER <object_type> <object_name> <actions>

e.g  ALTER SESSION SET <params>


2. CREATE - The CREATE command is used to create a new object.

Syntax : 

              CREATE <object_type> <object_name>

e.g.  CREATE  DATABASE ABC


3. DROP - The DROP command is used to remove an object from the system.

Syntax : DROP <object_type> [IF EXISTS] <identifier>

e.g  DROP USER IF EXISTS abc_user


4. USE - The USE command is used to specify the role, warehouse, database, or schema for the current session.

Syntax : USE WAREHOUSE <name>

e.g   USE WAREHOUSE xyz 


5. SHOW - The SHOW command is used to provide metadata for objects.

Syntax : SHOW <object_type_plural> [LIKE '<pattern>']

e.g  SHOW PARAMETERS [LIKE '<pattern>']






C) What are commonly used DML commands in Snowflake?

         Here is the list of DML commands used in Snowflake -

          1. INSERT

          2. MERGE

          3. UPDATE

          4. DELETE

          5. TRUNCATE

             Let's see each of these commands one by one.


1. INSERT - The INSERT command is used to insert one or more rows into a table.

       Syntax :

       INSERT INTO <table_name> [(<column_name>, ...)]

           VALUES (<value> | DEFAULT | NULL, ...)

       e.g

        INSERT INTO TAB_ABC (id, name)

         VALUES (100, 'DRONA')

2) MERGE - The MERGE command is used to insert, update, or delete values in a table based on values in a subquery or another table.

         Syntax : 

         MERGE INTO <table_name> USING <source> ON <join_expr>

         e.g

         MERGE INTO TAB_ABC USING TAB_PQR ON TAB_ABC.ID = TAB_PQR.ID WHEN MATCHED THEN 

         UPDATE SET TAB_ABC.NAME = TAB_PQR.NAME


3. UPDATE: The UPDATE command is used to update rows in the table.

       Syntax : 

              UPDATE <table_name> SET <field>=<value>

       e.g 

            UPDATE TAB_ABC SET NAME = 'XYZ'


4. DELETE : The DELETE command is used to delete records from a table.

         Syntax : 

              DELETE FROM <table_name> WHERE <condition>

          e.g 

              DELETE FROM TAB_ABC WHERE NAME='BOB'


5) TRUNCATE: The TRUNCATE command removes all records from a table, while the table itself, including its privileges and constraints, remains intact.

          Syntax :

              TRUNCATE TABLE <table_name>

          e.g 

              TRUNCATE TABLE TAB_ABC







Understanding Survivorship in Informatica IDMC - Customer 360 SaaS

  In Informatica IDMC - Customer 360 SaaS, survivorship is a critical concept that determines which data from multiple sources should be ret...