UPDATE: I gave a WebEx presentation at Tableau's Think Data Thursday group about how I built this. You can view the recording here: 
https://tableausoftware.webex.com/tableausoftware/lsr.php?RCID=3d66e85d93488ccbb47b21aab286cdcb

About a month ago Tableau asked me to do a guest blog post about my background. People want to know what it takes to become a Tableau Zen Master. A TZM is expected to be a master of Tableau's products, a teacher who engages with the Tableau community, and an innovator in how they use Tableau. A lot of very talented people contribute heavily to the Tableau community, so I feel very honored to be named among the thirteen people who are currently Tableau Zen Masters.

I wanted to do something unique for the blog post about my background. Why not use Tableau to visualize my career progression? Below is what I came up with. My hope is that you'll be surprised that you can make stuff like this in Tableau. Click the image to view the interactive version. 



This one took quite a while to complete. Every mark you click on is interactive and has audio narration. Most of them also have content that appears in the "showcase" section. A large part of the development time was spent writing the story, recording it, and finding images that related to the story. In total I spent more than 20 hours on it. 

I drew a lot of inspiration from visual resumes I found through a Google image search. I originally sketched it out on paper to figure out the basic design. Then I used PowerPoint extensively for the graphics and to finalize the design. The audio was recorded in GarageBand and published to the web with Spreaker. I used free web hosting from biz.nf to embed the audio player.

I'm not a professional voice actor, so some parts of the audio aren't super polished. But I had reached my threshold of patience for all the recording, and perfect can be the enemy of good. In total there are 23 minutes of audio embedded in the viz. I hope you enjoy the story, and look for my guest blog post on Tableau Public in the near future.

This viz also serves a dual purpose. I'm hoping it is good enough to land me a spot in the Iron Viz competition in Seattle. I'm submitting it for the quantified self feeder competition. Wish me luck!

2021 Wk 7

Challenge: https://preppindata.blogspot.com/2021/02/2021-week-7-vegan-shopping-list.html

The 2021 Week 7 #PreppinData challenge introduces the use of the GROUP_CONCAT() and INSTR() functions in Exasol and expands on the use of scaffolding techniques to unpivot data contained in a single field with delimited values. I also used CTEs, regular expressions, and built a view to avoid code repetition for this challenge. The full SQL solution is posted below.

Lines 1-23 create our table structures and load the data into Exasol. Refer to the first post in this series for more about these DDL/DML statements. The bulk of the work is saved in a view that starts on line 25 so that it can be referenced to generate the two outputs, which simply pull from the view with different filters. This is good practice: you avoid repeating code for different uses, and you avoid the version control issues that arise when minor changes fork from the original code.

The view contains a couple of common table expressions (CTEs) that manipulate the keyword dataset. The challenge here is that the keywords exist in two discrete columns, each holding a comma-separated list. The first column has ingredients and the second has E numbers used to identify food additives. The two lists are unrelated, so they ultimately need to be concatenated. In retrospect I probably could have eliminated the second CTE by concatenating the two fields in the first, but I'll explain the steps as I did them.


The first CTE on lines 26-33, named "keyword_tall_pass1", converts the two comma-separated lists into one record per value. This is accomplished by cross joining to a statement on line 32 that uses the CONNECT BY hierarchical query functionality to generate a numbered sequence of rows (the level < 30 condition actually yields 29 rows; 30 was just an arbitrary cap large enough to capture every value in the two lists). On line 33 I drop the excess records I didn't need, since there were only 16 items at most between the two lists. The magic here is the REGEXP_SUBSTR() function. I used pattern matching to capture runs of alphabetic characters from the first list (line 29) or numeric characters from the second list (line 30), keeping the nth match, where n is the RecordID value generated on line 32. The result is one row per keyword: "Milk" is the first word, followed by "Whey", "Honey", etc., and likewise for the second list of E numbers.
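The nth-match trick can be sketched outside the database. The sample lists and the regexp_substr helper below are hypothetical stand-ins for the keyword table and Exasol's REGEXP_SUBSTR(), purely to illustrate the unpivot mechanic:

```python
import re

# Made-up sample mirroring the keyword table's two comma-separated lists
animal_ingredients = "Milk,Whey,Honey,Gelatine"
e_numbers = "120,542,901,904"

def regexp_substr(s, pattern, occurrence):
    """Mimic Exasol's REGEXP_SUBSTR(s, pattern, 1, occurrence):
    the nth match of pattern in s, or None when there is no nth match."""
    matches = re.findall(pattern, s)
    return matches[occurrence - 1] if occurrence <= len(matches) else None

# The CONNECT BY clause just manufactures RecordIDs; a range() does that here
rows = []
for record_id in range(1, 30):
    ingredient = regexp_substr(animal_ingredients, r"[A-Za-z]+", record_id)
    e_number = regexp_substr(e_numbers, r"[0-9]+", record_id)
    if ingredient is not None or e_number is not None:  # drop the excess records
        rows.append((record_id, ingredient, e_number))

print(rows)
# → [(1, 'Milk', '120'), (2, 'Whey', '542'), (3, 'Honey', '901'), (4, 'Gelatine', '904')]
```

Each pass over the generated sequence pulls one more delimited value out of the string, which is exactly how the cross join to the row generator unpivots the lists.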


The second CTE, named "nonvegan_keywords", on lines 35-38 takes the Ingredient and ENumber columns shown above and stacks them on top of each other with UNION ALL. The ALL qualifier tells the query compiler not to bother checking for duplicate values between the two expressions. I also needed to prepend the letter "E" to each number; you can concatenate string values with the double-pipe operator "||". As it turned out, the E numbers weren't found in the shopping list data, so none of that data was used anyway.
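As a rough Python analogue, with made-up tuples standing in for the first CTE's output, the stack-and-prefix step amounts to:

```python
# Hypothetical first-CTE output: (RecordID, Ingredient, ENumber) tuples
rows = [(1, "Milk", "120"), (2, "Whey", "542"), (3, "Honey", None)]

# Stack the two columns into one keyword list, prefixing "E" onto the numbers
# (the equivalent of UNION ALL plus the || concatenation operator)
keywords = [ing for _, ing, _ in rows if ing is not None] \
         + ["E" + num for _, _, num in rows if num is not None]

print(keywords)  # → ['Milk', 'Whey', 'Honey', 'E120', 'E542']
```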

The final SELECT statement for the view appears on lines 41-50. This query uses the shopping list as the base table (line 48) and cross joins to the "nonvegan_keywords" CTE (line 49) so that each shopping list product's ingredients can be compared to every keyword in the list individually. I do this with the CASE statement you find on lines 46-47. Exasol is case sensitive, so I forced both the ingredients and the keywords to be lowercase and used the INSTR() function to see if an individual keyword is found in the list of ingredients. INSTR() returns the character location of the found text in a string, so if it is greater than zero I return the matched keyword. Any non-matched keywords are ignored and return NULL. 

The CASE statement is wrapped in a GROUP_CONCAT() function, which is an aggregate function for string data. By default it comma-delimits the string values within each group, but you could choose a different delimiter. I then grouped by the Product, Description, and Ingredients fields on line 50 to get the dataset back to one line per product on the shopping list. The results are saved in a view (line 25) so I can call all this code again in my output queries.
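SQLite happens to support GROUP_CONCAT() and INSTR() as well, so the core cross-join-and-match pattern of the view can be sketched portably. The tables, rows, and identifiers below are invented for illustration and are not the real challenge data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE shopping_list (product TEXT, ingredients TEXT);
    INSERT INTO shopping_list VALUES
        ('Granola', 'Oats, Honey, Almonds'),
        ('Oat Milk', 'Oats, Water');
    CREATE TABLE nonvegan_keywords (keyword TEXT);
    INSERT INTO nonvegan_keywords VALUES ('milk'), ('honey'), ('gelatine');
""")

rows = con.execute("""
    SELECT sl.product,
           GROUP_CONCAT(                -- aggregates only the non-NULL matches
               CASE WHEN INSTR(LOWER(sl.ingredients), LOWER(nvk.keyword)) > 0
               THEN nvk.keyword END) AS contains
    FROM shopping_list sl
        CROSS JOIN nonvegan_keywords nvk  -- every product paired with every keyword
    GROUP BY sl.product
    ORDER BY sl.product
""").fetchall()

print(rows)  # → [('Granola', 'honey'), ('Oat Milk', None)]
```

Filtering this result on contains IS NULL versus IS NOT NULL is how the two output lists split apart. One caveat of plain INSTR() substring matching: a keyword like "milk" would also match inside a longer word such as "milkweed".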

The two output queries on lines 53-63 are very simple and largely the same. One just filters for products with a NULL value for the "Contains" field and the other for non-NULL values. This means one list is vegan and the other list is non-vegan.

 1  CREATE OR REPLACE TABLE DEV_MAJ."PD_2021_Wk7_ShoppingList" (
 2      "Product"     VARCHAR(255),
 3      "Description" VARCHAR(1000),
 4      "Ingredients" VARCHAR(2000)
 5  );
 6  CREATE OR REPLACE TABLE DEV_MAJ."PD_2021_Wk7_Keywords" (
 7      "AnimalIngredients" VARCHAR(1000),
 8      "ENumbers"          VARCHAR(1000)
 9  );
10  IMPORT INTO DEV_MAJ."PD_2021_Wk7_ShoppingList" FROM LOCAL CSV
11      FILE 'C:\Mark\Preppin Data\PD 2021 Wk 7 Shopping List.csv'
12      SKIP = 1
13      ROW SEPARATOR  = 'CRLF'
14      COLUMN SEPARATOR = ','
15      COLUMN DELIMITER = '"'
16  ;
17  IMPORT INTO DEV_MAJ."PD_2021_Wk7_Keywords" FROM LOCAL CSV
18      FILE 'C:\Mark\Preppin Data\PD 2021 Wk 7 Ingredients.csv'
19      SKIP = 1
20      ROW SEPARATOR  = 'CRLF'
21      COLUMN SEPARATOR = ','
22      COLUMN DELIMITER = '"'
23  ;
24  
25  CREATE OR REPLACE VIEW DEV_MAJ."PD_2021_Wk7_ShoppingListKeywords_vw" AS
26      WITH keyword_tall_pass1 AS ( --generate unique rows for ingredients / e-numbers
27          SELECT
28              i."RecordID"
29              ,REGEXP_SUBSTR(k."AnimalIngredients",'(?i)([a-z]+)',1,i."RecordID") AS "Ingredient"
30              ,REGEXP_SUBSTR(k."ENumbers",'(?i)([0-9]+)',1,i."RecordID") AS "ENumber"
31          FROM DEV_MAJ."PD_2021_Wk7_Keywords" k
32              CROSS JOIN (SELECT level AS "RecordID" FROM DUAL CONNECT BY level < 30) i --30 is arbitrary
33          WHERE local."Ingredient" IS NOT NULL OR local."ENumber" IS NOT NULL --drop null records
34  
35      ), nonvegan_keywords AS ( --stack ingredients / e-numbers
36          SELECT "Ingredient" AS "Keyword" FROM keyword_tall_pass1 k WHERE k."Ingredient" IS NOT NULL
37          UNION ALL
38          SELECT 'E' || "ENumber" AS "Keyword" FROM keyword_tall_pass1 k WHERE k."ENumber" IS NOT NULL
39      )
40  
41      SELECT --return products w/ delimited list of matching ingredients
42          sl."Product"
43          ,sl."Description"
44          ,sl."Ingredients"
45          ,GROUP_CONCAT(
46              CASE WHEN INSTR(LOWER(sl."Ingredients"),LOWER(nvk."Keyword"))>0  --when ingredients contain keyword
47              THEN nvk."Keyword" END) AS "Contains"
48      FROM DEV_MAJ."PD_2021_Wk7_ShoppingList" sl
49          CROSS JOIN nonvegan_keywords nvk
50      GROUP BY 1,2,3
51  ;
52  
53  --OUTPUT 1: Vegan Shopping List
54  SELECT slk."Product", slk."Description"
55  FROM DEV_MAJ."PD_2021_Wk7_ShoppingListKeywords_vw" slk
56  WHERE slk."Contains" IS NULL
57  ORDER BY 1;
58  
59  --OUTPUT 2: Non-Vegan Shopping List
60  SELECT slk."Product", slk."Description", slk."Contains"
61  FROM DEV_MAJ."PD_2021_Wk7_ShoppingListKeywords_vw" slk
62  WHERE slk."Contains" IS NOT NULL
63  ORDER BY 1;

I hope you found this exercise informative. If so, share with your friends and colleagues on your favorite social platforms.  If there is a particular #PreppinData challenge you'd like me to re-create in SQL, let me know on Twitter @ugamarkj.

If you want to follow along, Exasol has a free Community Edition. It is pretty easy to stand up as a virtual machine with the free Oracle VirtualBox platform. I use DataGrip as my favorite database IDE, which is paid software, though you can use the free DBeaver platform if you prefer.