Batch: E8 Case study on Big University. Suppose that a data warehouse for Big University consists of the following four dimensions: student, course, semester, and instructor, and two measures count and avg_grade. When at the lowest conceptual level (e.g., for a given student, course, semester, and instructor combination), the avg_grade measure stores the actual course grade of the student. At higher conceptual levels, avg_grade stores the average grade for the given combination.
(a) Draw a snowflake schema diagram for the data warehouse.
(b) Starting with the base cuboid [student, course, semester, instructor], what specific OLAP operations (e.g., roll-up from semester to year) should one perform in order to list the average grade of CS courses for each Big University student?
(c) What is a staging area? Do we need it? What is the purpose of a staging area?
Problem 4: (25 points) Do problem 3.4 on page 152. Suppose that a data warehouse for Big University consists of the following four dimensions: student, course, semester, and instructor, and two measures count and avg_grade. When at the lowest conceptual level (e.g., for a given student, course, semester, and instructor combination), the avg_grade measure stores the actual course grade of the student. At higher conceptual levels, avg_grade stores the average grade for the given combination.
(a) Draw a snowflake schema diagram for the data warehouse.
(b) Starting with the base cuboid [student, course, semester, instructor], what specific OLAP operations (e.g., roll-up from semester to year) should one perform in order to list the average grade of CS courses for each Big University student?
(c) If each dimension has five levels (including all), such as "student < major < status < university < all", how many cuboids will this cube contain (including the base and apex cuboids)?
Solution:
(a) See the snowflake schema shown as Figure 3.4 later in this document.
(b)
Starting with the base cuboid [student, course, semester, instructor]:
1. Roll-up on course from course_key to major.
2. Roll-up on student from student_key to university.
3. Dice on course, student with department = "CS" and university = "Big University".
4. Drill-down on student from university to student name.
(c) The cube will contain 5^4 = 625 cuboids.
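The same sequence can be sketched as a single SQL query over the snowflake schema shown as Figure 3.4 later in this document. The fact table name big_university_fact, the join keys, and the department/university attributes used in the dice step are assumptions made for illustration; only the measure avg_grade and the dimension names come directly from the problem.

-- Average grade of CS courses for each Big University student:
-- roll up over semester and instructor, dice on department and university.
SELECT s.name AS student_name, AVG(f.avg_grade) AS avg_grade
FROM big_university_fact f
JOIN student s ON s.student_key = f.student_key
JOIN course c ON c.course_key = f.course_key
WHERE c.department = 'CS'
  AND s.university = 'Big University'
GROUP BY s.name;

At the base level each fact row holds the actual course grade, so averaging avg_grade over the grouped rows yields the per-student average directly.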
Problem 3: (25 points) Do problem 3.3 on page 152. Suppose that a data warehouse consists of the three dimensions time, doctor, and patient, and the two measures count and charge, where charge is the fee that a doctor charges a patient for a visit.
(a) Enumerate three classes of schemas that are popularly used for modeling data warehouses.
(b) Draw a schema diagram for the above data warehouse using one of the schema classes listed in (a).
(c) Starting with the base cuboid [day, doctor, patient], what specific OLAP operations should be performed in order to list the total fee collected by each doctor in 2004?
(d) To obtain the same list, write an SQL query assuming the data are stored in a relational database with the schema fee(day, month, year, doctor, hospital, patient, count, charge).
Solution:
(a) Star schema: a fact table in the middle connected to a set of dimension tables.
Snowflake schema: a refinement of the star schema where some dimensional hierarchies are normalized into a set of smaller dimension tables, forming a shape similar to a snowflake.
Fact constellation: multiple fact tables share dimension tables; viewed as a collection of stars, it is therefore also called a galaxy schema.
(b) As in the figures below.
(c) Starting with the base cuboid [day, doctor, patient], the following OLAP operations list the total fee collected by each doctor in 2004:
1. Roll-up on time from day to month to year.
2. Slice for year = "2004".
3. Roll-up on patient from individual patient to all.
4. Slice for patient = "all".
5. Read off the list of total fees collected by each doctor in 2004.
(d)
SELECT doctor, SUM(charge)
FROM fee
WHERE year = 2004
GROUP BY doctor;
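The roll-up and slice steps in (c) can also be expressed directly against the flattened fee table of part (d). The sketch below uses the SQL:1999 ROLLUP grouping (available in PostgreSQL, Oracle, and SQL Server, among others); only the fee table and its columns come from the problem, the rest is illustrative.

-- Roll up along the time hierarchy day < month < year for each doctor;
-- the WHERE clause slices the cube down to the year 2004.
SELECT doctor, year, month, day, SUM(charge) AS total_charge
FROM fee
WHERE year = 2004
GROUP BY doctor, ROLLUP (year, month, day);

The rows in which year is still present but month and day come back as NULL are the year-level totals, i.e. exactly the list produced by the query in (d).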
4. Suppose that a data warehouse for Big University consists of the following four dimensions: student, course, semester, and instructor, and two measures count and avg_grade. When at the lowest conceptual level (e.g., for a given student, course, semester, and instructor combination), the avg_grade measure stores the actual course grade of the student. At higher conceptual levels, avg_grade stores the average grade for the given combination.
(a) Draw a snowflake schema diagram for the data warehouse. (P116)
Answer: Big University data are considered along four dimensions, namely, semester, student, course, and instructor. The schema contains a central fact table for Big_University that contains keys to each of the four dimensions, along with the two measures count and avg_grade.
Semester dimension table: semester_key, quarter, year
Student dimension table: student_key, student_no, name, age, sex, class, major_key
Major dimension table: major_key, major_type
Course dimension table: course_key, course_number, course_name, property, credit
Instructor dimension table: instructor_key, name, age, office_key
Office dimension table: office_key, office_telephone, office_address
Big_University fact table: semester_key, course_key, student_key, instructor_key, count, avg_grade
Figure 3.4: Snowflake schema of a data warehouse for Big_University.
(b) Starting with the base cuboid [student, course, semester, instructor], what specific OLAP operations (e.g., roll-up from semester to year) should one perform in order to list the average grade of CS courses for each Big University student?
Answer: Starting with the base cuboid [student, course, semester, instructor], we use the following OLAP operations in order to list the average grade of CS courses for each Big University student.
Roll-up: the roll-up operation performs aggregation on a data cube, either by climbing up a concept hierarchy for a dimension or by dimension reduction; here, for example, the semester dimension carries the hierarchy quarter < year.
Slice and dice: the slice operation performs a selection on one dimension of the given cube, resulting in a subcube.
(c) If each dimension has five levels (including all), such as "student < major < status < university < all", how many cuboids will this cube contain (including the base and apex cuboids)?
Total number of cuboids = ∏ (L_i + 1), where L_i is the number of levels associated with dimension i, not counting the virtual top level all; one is added to L_i in the formula to include it. The cube has 4 dimensions and each dimension has 5 levels (including all), so L_i + 1 = 5 for every dimension, and the total number of cuboids that can be generated is 5^4 = 625.
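For completeness, the snowflake schema of Figure 3.4 can be written down as SQL DDL. The sketch below keeps the table and column names from the figure; the data types, key constraints, and the fact-table name big_university_fact are assumptions.

-- Outer (normalized) dimension tables: major and office are split out,
-- which is what gives the schema its snowflake shape.
CREATE TABLE major (major_key INT PRIMARY KEY, major_type VARCHAR(50));
CREATE TABLE office (office_key INT PRIMARY KEY, office_telephone VARCHAR(20), office_address VARCHAR(100));

-- Dimension tables referenced directly by the fact table.
CREATE TABLE semester (semester_key INT PRIMARY KEY, quarter VARCHAR(10), year INT);
CREATE TABLE course (course_key INT PRIMARY KEY, course_number VARCHAR(20), course_name VARCHAR(100), property VARCHAR(30), credit INT);
CREATE TABLE student (student_key INT PRIMARY KEY, student_no VARCHAR(20), name VARCHAR(50), age INT, sex CHAR(1), class VARCHAR(20), major_key INT REFERENCES major(major_key));
CREATE TABLE instructor (instructor_key INT PRIMARY KEY, name VARCHAR(50), age INT, office_key INT REFERENCES office(office_key));

-- Central fact table with foreign keys to the four dimensions and the two measures.
CREATE TABLE big_university_fact (
    semester_key INT REFERENCES semester(semester_key),
    course_key INT REFERENCES course(course_key),
    student_key INT REFERENCES student(student_key),
    instructor_key INT REFERENCES instructor(instructor_key),
    count INT,          -- the count measure; "count" may need quoting as an identifier in some systems
    avg_grade DECIMAL(4,2)
);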
What is a staging area? Do we need it? What is the purpose of a staging area?
A staging area is a place where you hold temporary tables on the data warehouse server. Staging tables are connected to the work area or to the fact tables. We need a staging area to hold the data and to perform data cleansing and merging before loading the data into the warehouse. In the absence of a staging area, the data load would have to go from the OLTP system to the OLAP system directly, which can severely hamper the performance of the OLTP system. This is the primary reason for the existence of a staging area. In addition, it also offers a platform for carrying out data cleansing.
The Data Warehouse Staging Area is a temporary location where data from source systems is copied. A staging area is mainly required in a data warehousing architecture for timing reasons. In short, all required data must be available before data can be integrated into the Data Warehouse. Due to varying business cycles, data processing cycles, hardware and network resource limitations, and geographical factors, it is not feasible to extract all the data from all operational databases at exactly the same time. For example, it might be reasonable to extract sales data on a daily basis; however, daily extracts might not be suitable for financial data that requires a month-end reconciliation process. Similarly, it might be feasible to extract "customer" data from a database in Singapore at noon Eastern Standard Time, but this would not be feasible for "customer" data in a Chicago database. Data in the Data Warehouse can be either persistent (i.e., remains around for a long period) or transient (i.e., only remains around temporarily). Not all businesses require a Data Warehouse Staging Area; for many businesses it is feasible to use ETL to copy data directly from operational databases into the Data Warehouse.
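As an illustration of this load pattern, the sketch below stages a raw extract, cleanses it in the staging area, and only then merges it into the warehouse. All table and column names (stg_sales, dw_sales_fact, and so on) are hypothetical.

-- 1. Land the raw extract from the operational source in a staging table
--    (kept loose, with no constraints, so the copy from the OLTP system is fast).
CREATE TABLE stg_sales (sale_id INT, sale_date DATE, customer_id INT, amount DECIMAL(10,2));

-- 2. Cleanse and merge inside the staging area, away from both the OLTP
--    workload and the warehouse query workload.
DELETE FROM stg_sales WHERE amount IS NULL OR amount < 0;

-- 3. Load the cleansed rows into the warehouse fact table in one controlled step.
INSERT INTO dw_sales_fact (sale_id, sale_date, customer_id, amount)
SELECT sale_id, sale_date, customer_id, amount
FROM stg_sales;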