Friday, February 23, 2024

Alteryx Cloud Quest 2 [includes spoiler]

This week's Alteryx Cloud Quest was to use the tools to determine the percent of customers who should get a happy meal discount. Using SQL, this is pretty straightforward. So why would you use Alteryx tools and handle the data step by step? Because not everyone understands SQL - or even wants to. Let alone standing up the tables somewhere in the first place so that they can be queried. Personally, I think everyone in IT should know some SQL because it's a base layer applicable to so many other architectures, but I do get that it's a bit like a foreign language as far as some people are concerned. And, if you're using a tool like Alteryx, in the end you can parameterize it and add some inputs so it's reusable, again, without knowing SQL. You could build a UI over your SQL, and tools exist to automate that [and have forever], but... choices are choices.
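
In SQL it's essentially a single conditional aggregate; done step by step, the same logic is just a filter followed by a summarize. A minimal Python sketch - the customer data and the eligibility rule [age under 12] are invented for illustration, not the actual quest criteria:

    # A minimal sketch, step by step: filter, then summarize.
    # The data and the rule [age under 12] are made up for
    # illustration - they are not the actual quest criteria.
    customers = [
        {"name": "Ann", "age": 9},
        {"name": "Bob", "age": 34},
        {"name": "Cal", "age": 11},
        {"name": "Dee", "age": 52},
    ]

    eligible = [c for c in customers if c["age"] < 12]   # Filter step
    pct = 100 * len(eligible) / len(customers)           # Summarize step
    print(f"{pct:.1f}% of customers get the discount")   # prints 50.0%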

I will say that this was fun for a specific reason... the general pattern was the exact pattern that underpins a Jira query workflow I wrote this week for status updates. Similarly, I could have used JQL [Jira Query Language] and a macro to get the % done details for a specific subset of epics out of Jira, but by laying it out with a Jira connector [to the API] I was able to get at a lot of fields I wouldn't normally dig around in [the Jira fieldset is over 1400 fields] and lay it out the way I wanted: story count, completed/uncompleted story counts, % done [raw counts, no story points], and even watchers [which is an interesting way of showing how "hot" an epic is]. Regarding that last point, a better way to handle it would be to count watchers on the stories, not the top-level entity. But that's future work. The general pattern was applicable to a whole pile of JQL-like work/reports I wanted, and they were all a short copy-and-modify from my base.
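
For the curious, the same pattern hit directly against the Jira REST API looks roughly like this. A minimal Python sketch, not my actual workflow - the site URL, credentials, and epic key are placeholders, the "Epic Link" JQL field varies by instance, and [per the future-work note above] this version counts watchers on the stories rather than the epic:

    # A rough sketch of the pattern against the Jira REST API directly
    # [the connector does the equivalent under the hood]. Site URL,
    # credentials, and epic key below are placeholders.
    import requests
    from requests.auth import HTTPBasicAuth

    BASE = "https://yoursite.atlassian.net"
    AUTH = HTTPBasicAuth("you@example.com", "your-api-token")

    def epic_status(epic_key):
        resp = requests.get(
            f"{BASE}/rest/api/2/search",
            params={
                "jql": f'"Epic Link" = {epic_key}',   # stories under the epic
                "fields": "status,watches",
                "maxResults": 100,
            },
            auth=AUTH,
        )
        issues = resp.json()["issues"]
        total = len(issues)
        done = sum(1 for i in issues
                   if i["fields"]["status"]["statusCategory"]["key"] == "done")
        watchers = sum(i["fields"]["watches"]["watchCount"] for i in issues)
        return total, done, (100 * done / total if total else 0), watchers

    total, done, pct, watchers = epic_status("PROJ-123")   # hypothetical key
    print(f"{done}/{total} stories done [{pct:.0f}%], {watchers} watchers")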

So: a. it works; b. I can extend it to whatever parameters I want; and c. I think I can use it for status updates instead of doing anything manual, saving myself a bunch of time and, more importantly, a lot of unnecessary and confusing conversations. If things were optimal, I'd write it out to Confluence each week and pretty much avoid all conversation, but even after base64-encoding my email + API key, Atlassian refused to acknowledge I was allowed to do so much as a GET via curl. I'm pretty sure work has the corporate wiki locked down tight. [I did do something similar at my legal job, writing the Cascade pipe-and-filter data jobs into SVG as they ran and dumping them with their params into visual "run" pages in the wiki that captured the specifics of each run. Amazingly useful. Amazingly underused. I think people wanted deniability for bad runs.]
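
For reference, the kind of call that got refused. Basic auth here is just base64 of "email:api-token" in an Authorization header, which the requests library builds for you; the site URL and credentials below are placeholders:

    # A simple read against the Confluence Cloud content API. On a
    # locked-down instance this comes back 401/403 even for a GET.
    import requests
    from requests.auth import HTTPBasicAuth

    resp = requests.get(
        "https://yoursite.atlassian.net/wiki/rest/api/content",
        params={"limit": 5},
        auth=HTTPBasicAuth("you@example.com", "your-api-token"),
    )
    print(resp.status_code, resp.reason)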

Here's my output. I could have been a little more efficient, but overall it does what it's supposed to. A number of other submissions avoided certain tools, like Cross Tab, altogether.
