
This is a guest post by Nick Parry. I recently gave Nick the challenge of creating a self-service way for our non-Tableau Desktop users to do fuzzy matches from data that these users have in spreadsheets. What he came up with was pretty ingenious:

Last week our team got a request to determine the value of a healthcare education campaign. They had Excel sheets containing the lists of potential patients and wanted to pull any matches from our patient data. Our goal was to pull a fuzzy-matched list of patients based on last name and date of birth for them to analyze.

Blending the data wasn't an option since our patient source is so large and the lists they wanted to pull were kept only in Excel (and may or may not be updated regularly). They also had to look for hundreds of patients at a time, so manually filtering for those patients would have been time-consuming. What I wanted was a way to pull all of the patients at once as easily as possible.

What I came up with was to let the user paste the full list of patient names and DOBs into a string parameter, and to create a filter that returns only the rows contained in that list. This way the user can copy an entire Excel column, paste it into the parameter, and pull the data with one search. Below is the calculation I used.
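A minimal sketch of that calculation (the parameter and field names here are illustrative, and the date formatting may need adjusting to match what the user pastes):

// Boolean calculated field, used as a filter set to True
// [Patient List] is the string parameter holding the pasted, delimited list
// UPPER() on both sides avoids capitalization mismatches
CONTAINS(
    UPPER([Patient List]),
    UPPER([Patient Name] + ", " + STR([Date of Birth]))
)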





Essentially this calculation concatenates patient name and birth date in our patient encounter data sources and checks to see whether that combination is found in the delimited parameter list that the end user is providing. Both the parameter and the values from the data source are converted to uppercase to prevent matching issues caused by inconsistent capitalization. So the end user would create a spreadsheet like the one below, copy and paste the values into the parameter in Tableau, and Tableau would return a list of all the fuzzy matches.

[Screenshot: an example spreadsheet holding the patient names and DOBs in the delimited format the parameter expects]



I did have some concern about the character limits on Tableau parameters, but after some quick testing I found they will accept up to around 34,000 characters.

I've created an example of this using Superstore sales data, matching on customer name and order date. You can play with this yourself by clicking the image below. Test by entering this string into the parameter: "Nicole Brennan, 10/22/2011|Georgia Rosenberg, 11/21/2011|Ricardo Emerson, 12/29/2011|Craig Molinari, 3/1/2012|Valerie Takahito, 4/5/2012|Pauline Chand, 8/1/2012|Andy Gerbode, 9/7/2012|Peter Fuller, 9/17/2012|David Philippe, 10/10/2012|Craig Carroll, 10/23/2012|Sam Craven, 11/10/2012|Lycoris Saunders, 11/14/2012|Duane Huffman, 11/15/2012|Kelly Williams, 11/20/2012|Harold Dahlen, 11/22/2012|Guy Phonely, 11/26/2012"


https://public.tableau.com/views/FuzzyMatchwithParameter_0/FuzzyMatch?:embed=y&:showTabs=y&:display_count=yes

UPDATE:
One of the comments on this post asked why you wouldn't simply copy and paste the values into a custom quick filter. I had two thoughts on this. First, the approach described above can better handle capitalization discrepancies because you can run LOWER() or UPPER() on both sides of the equation. Second, I assumed this approach would be faster since it doesn't have to scan through two large lists. To test this theory, Nick ran this query with thousands of names against a data set with millions of records. The result showed that the approach described in this post was more than 7x faster than simply copying the values into a quick filter. Pretty cool.

2021 Wk 7

Challenge: https://preppindata.blogspot.com/2021/02/2021-week-7-vegan-shopping-list.html

The 2021 Week 7 #PreppinData challenge introduces the GROUP_CONCAT() and INSTR() functions in Exasol and expands on the use of scaffolding techniques to unpivot data contained in a single field with delimited values. For this challenge I also used CTEs and regular expressions, and built a view to avoid code repetition. The full SQL solution is posted below.

Lines 1-23 create our table structures and load the data into Exasol. Refer to the first post in this series for more about these DDL / DML statements. The bulk of the work is saved in a view that starts on line 25 so that it can be referenced to generate the two outputs, which just pull from the view with different filters. This is good practice so that you don't repeat code multiple times for different uses and later have version control issues when minor changes fork from the original code.

The view contains a couple of common table expressions (CTEs) that manipulate the keyword dataset. The challenge with this one is that the keywords exist in two discrete columns as comma-separated lists (shown below). The first column has ingredients and the second has E numbers used to identify food additives. The two lists are not related, so they ultimately need to be concatenated. In retrospect I probably could have eliminated the second CTE by concatenating the two fields in the first, but I'll explain the steps as I did them.

[Screenshot: the keyword table, a single row with two comma-separated lists in the AnimalIngredients and ENumbers columns]


The first CTE on lines 26-33, named "keyword_tall_pass1", converts the two comma-separated lists into one record per value as shown below. This is accomplished by cross joining to a statement on line 32 that uses the CONNECT BY hierarchical query functionality to generate scaffold records (29 of them, since the condition is level < 30; the cap of 30 is just a number I chose that is large enough to capture every value in the two lists). On line 33 I drop the excess records I didn't need, because there were only 16 items at most between the two lists. The magic here is in the REGEXP_SUBSTR() functions. I used pattern matching to capture just alphabetic characters for the first list (line 29) or numeric characters for the second list (line 30) and kept the nth matching instance, where n is the RecordID value generated on line 32. The result of this CTE is shown below: "Milk" was the first word, followed by "Whey", "Honey", etc. from the screenshot above, and likewise for the second list of E numbers.

[Screenshot: keyword_tall_pass1 output, one row per RecordID with its Ingredient and ENumber values]
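As a quick standalone illustration of the occurrence argument (the sample string is assumed, not pulled from the challenge data):

--REGEXP_SUBSTR(string, pattern, position, occurrence)
SELECT REGEXP_SUBSTR('Milk, Whey, Honey', '(?i)([a-z]+)', 1, 2) AS "Token" FROM DUAL;
--returns 'Whey'; occurrence 4 would return NULL because only three tokens exist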


The second CTE, named "nonvegan_keywords", on lines 35-38 just takes the Ingredient and ENumber columns shown above and stacks them on top of each other with UNION ALL. The ALL qualifier tells the query compiler not to bother checking for duplicate values between the two expressions. I also needed to prepend the letter "E" to each number; you can concatenate string values with the double-pipe operator "||". It turned out that the E numbers weren't found in the shopping list data, so none of that data was used anyway.
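A tiny standalone example of both ideas (the literals are made up):

--'||' concatenates strings; UNION ALL stacks rows without checking for duplicates
SELECT 'Milk' AS "Keyword" FROM DUAL
UNION ALL
SELECT 'E' || '120' AS "Keyword" FROM DUAL;
--returns two rows: 'Milk' and 'E120'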

The final SELECT statement for the view appears on lines 41-50. This query uses the shopping list as the base table (line 48) and cross joins to the "nonvegan_keywords" CTE (line 49) so that each shopping list product's ingredients can be compared to every keyword individually. I do this with the CASE statement on lines 46-47. Exasol string matching is case sensitive, so I forced both the ingredients and the keywords to lowercase and used the INSTR() function to see whether an individual keyword is found in the list of ingredients. INSTR() returns the character position of the found text in a string, so if it is greater than zero I return the matched keyword. Any non-matched keywords are ignored and return NULL.
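For example, with an assumed ingredient string:

--INSTR returns the 1-based position of the first match, or 0 when nothing is found
SELECT INSTR(LOWER('Oats, Milk Powder, Salt'), LOWER('Milk')) AS "Position" FROM DUAL;
--returns 7, so the CASE test of "greater than zero" treats this row as a match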

The CASE statement is wrapped in a GROUP_CONCAT() function, which is an aggregate function for string data. By default it comma-delimits the string values within the group, but you can choose a different delimiter. I then grouped by the Product, Description, and Ingredients fields on line 50 to get the dataset back to one line per product on the shopping list. The results are saved in a view (line 25) so I can call all this code again for my output queries.
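A standalone sketch of the delimiter behavior (sample values assumed):

--SEPARATOR overrides the default comma delimiter
SELECT GROUP_CONCAT("Keyword" ORDER BY "Keyword" SEPARATOR '; ') AS "Contains"
FROM (SELECT 'milk' AS "Keyword" FROM DUAL
      UNION ALL
      SELECT 'whey' FROM DUAL) t;
--returns 'milk; whey'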

The two output queries on lines 53-63 are very simple and largely the same. One just filters for products with a NULL value for the "Contains" field and the other for non-NULL values. This means one list is vegan and the other list is non-vegan.

 1  CREATE OR REPLACE TABLE DEV_MAJ."PD_2021_Wk7_ShoppingList" (
 2      "Product"     VARCHAR(255),
 3      "Description" VARCHAR(1000),
 4      "Ingredients" VARCHAR(2000)
 5  );
 6  CREATE OR REPLACE TABLE DEV_MAJ."PD_2021_Wk7_Keywords" (
 7      "AnimalIngredients" VARCHAR(1000),
 8      "ENumbers"          VARCHAR(1000)
 9  );
10  IMPORT INTO DEV_MAJ."PD_2021_Wk7_ShoppingList" FROM LOCAL CSV
11      FILE 'C:\Mark\Preppin Data\PD 2021 Wk 7 Shopping List.csv'
12      SKIP = 1
13      ROW SEPARATOR  = 'CRLF'
14      COLUMN SEPARATOR = ','
15      COLUMN DELIMITER = '"'
16  ;
17  IMPORT INTO DEV_MAJ."PD_2021_Wk7_Keywords" FROM LOCAL CSV
18      FILE 'C:\Mark\Preppin Data\PD 2021 Wk 7 Ingredients.csv'
19      SKIP = 1
20      ROW SEPARATOR  = 'CRLF'
21      COLUMN SEPARATOR = ','
22      COLUMN DELIMITER = '"'
23  ;
24
25  CREATE OR REPLACE VIEW DEV_MAJ."PD_2021_Wk7_ShoppingListKeywords_vw" AS
26      WITH keyword_tall_pass1 AS ( --generate unique rows for ingredients / e-numbers
27          SELECT
28              i."RecordID"
29              ,REGEXP_SUBSTR(k."AnimalIngredients",'(?i)([a-z]+)',1,i."RecordID") AS "Ingredient"
30              ,REGEXP_SUBSTR(k."ENumbers",'(?i)([0-9]+)',1,i."RecordID") AS "ENumber"
31          FROM DEV_MAJ."PD_2021_Wk7_Keywords" k
32              CROSS JOIN (SELECT level AS "RecordID" FROM DUAL CONNECT BY level < 30) i --30 is arbitrary
33          WHERE local."Ingredient" IS NOT NULL OR local."ENumber" IS NOT NULL --drop null records
34
35      ), nonvegan_keywords AS ( --stack ingredients / e-numbers
36          SELECT "Ingredient" AS "Keyword" FROM keyword_tall_pass1 k WHERE k."Ingredient" IS NOT NULL
37          UNION ALL
38          SELECT 'E' || "ENumber" AS "Keyword" FROM keyword_tall_pass1 k WHERE k."ENumber" IS NOT NULL
39      )
40
41      SELECT --return products w/ delimited list of matching ingredients
42          sl."Product"
43          ,sl."Description"
44          ,sl."Ingredients"
45          ,GROUP_CONCAT(
46              CASE WHEN INSTR(LOWER(sl."Ingredients"),LOWER(nvk."Keyword"))>0  --when ingredients contain keyword
47              THEN nvk."Keyword" END) AS "Contains"
48      FROM DEV_MAJ."PD_2021_Wk7_ShoppingList" sl
49          CROSS JOIN nonvegan_keywords nvk
50      GROUP BY 1,2,3
51  ;
52
53  --OUTPUT 1: Vegan Shopping List
54  SELECT slk."Product", slk."Description"
55  FROM DEV_MAJ."PD_2021_Wk7_ShoppingListKeywords_vw" slk
56  WHERE slk."Contains" IS NULL
57  ORDER BY 1;
58
59  --OUTPUT 2: Non-Vegan Shopping List
60  SELECT slk."Product", slk."Description", slk."Contains"
61  FROM DEV_MAJ."PD_2021_Wk7_ShoppingListKeywords_vw" slk
62  WHERE slk."Contains" IS NOT NULL
63  ORDER BY 1;

I hope you found this exercise informative. If so, share with your friends and colleagues on your favorite social platforms.  If there is a particular #PreppinData challenge you'd like me to re-create in SQL, let me know on Twitter @ugamarkj.

If you want to follow along, Exasol has a free Community Edition. It is pretty easy to stand up as a virtual machine with the free Oracle VirtualBox platform. I use DataGrip as my favorite database IDE, which is paid software, though you can use the free DBeaver platform if you prefer.