Tableau Conference: IT Goodness

The Tableau Conference is near! I’m sure most of you are as excited about it as I am. This year promises to, once again, deliver on Tableau’s unique ability to provide a fun and functional conference.

This year, there will be a strong focus on some great IT sessions, meetups and more! So, visit this link and check out what’s going to happen (I’ll be there). If you’ve got questions, please let me know.

See you in Austin!

-Mike


Automatically Choose the Max Filter Date in your Tableau Filter

Here's a quick tip for Tableau users who may have tried to get their date filter to automatically select the max date while (here's the catch) still keeping the drop-down with the other dates in it. Sure, you could use an LOD calculation to get a Boolean, but that would be too easy. Sometimes people just want the max date selected in their filter automatically. Oh, and this also works for dynamic parameters.

dynamic-filter-2
I really want the max date. Please.

Here are the steps:

  • Run SQL to query for the max date in the column used by your filter
  • Match the date format (e.g., mm-d-yyyy or mm-dd-yyyy)
  • Update the workbook XML section containing the filter
  • Save the workbook
  • Re-publish

The little bit of code you’ll need to dynamically do this will look like:

# Load the workbook XML, point the date filter at the max date ($tsFilterMax comes from your max-date query), then save
[xml]$TsUpdateTWB = Get-Content "someWorkbook.twb"
$TsUpdateTWB.SelectNodes('//filter/*') | Where-Object { $_.level -like '*Status Date*' } | ForEach-Object { $_.member = "#$tsFilterMax#" }
$TsUpdateTWB.Save("someWorkbook.twb")

That's it! Since PowerShell makes it so easy to deal with XML, all you need to do is find the filter by name and update it.

This method works with Excel files, databases, and everything in between. As long as you can query it, you're golden.
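For the first two steps, here's a minimal sketch, assuming the SqlServer module's Invoke-Sqlcmd and a hypothetical server, database, table, and column; swap in whatever query works for your source:

# Hypothetical server/database/table names -- adjust to your environment
$maxDate = Invoke-Sqlcmd -ServerInstance 'yourSqlServer' -Database 'yourDb' -Query "SELECT MAX([Status Date]) AS MaxDate FROM dbo.YourTable"

# Match the date format used in the workbook's filter (e.g., mm-dd-yyyy)
$tsFilterMax = ([datetime]$maxDate.MaxDate).ToString('MM-dd-yyyy')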

To make this fully automatic, I use the FileSystemWatcher class to trigger the run as soon as someone either saves a file or drops the name of the extract in Slack.
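For the file-save trigger, here's a minimal sketch using FileSystemWatcher and Register-ObjectEvent; the drop folder and script path are hypothetical:

# Watch a drop folder for workbook saves (paths are assumptions)
$watcher = New-Object System.IO.FileSystemWatcher 'C:\TableauDrops', '*.twb'
$watcher.EnableRaisingEvents = $true

# Re-run the max-date update and re-publish whenever a workbook changes
Register-ObjectEvent -InputObject $watcher -EventName Changed -Action {
    & 'C:\Scripts\Update-MaxDateFilter.ps1' -Workbook $Event.SourceEventArgs.FullPath
}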

Please let me know if you'd like a demo, or just come to TC16 and I'll be showing this (and more)!

Stop a Tableau Extract Refresh without stopping Server

Oh no! I kicked off this Tableau extract and I need to take it back!! How do I stop it?

For as long as I've been using Tableau, stopping the 'runaway' extract has always eluded me. Sure, netstat and procexp can sort of get you there in a pretty raw way. If it's late at night and there's just one job running, it's pretty easy to find the port and process ID (netstat -ano in PowerShell 3+).

bye-bye-ts-2
Oops. We didn’t want to run that one.

The challenge is when you have multiple backgrounders (on multiple machines) and they’re all active; you then start asking: which one is it? Hopefully, you guess correctly and, voila, it’s gone.

Until now.

bye-bye-ts-1
The process id for the currently running extract

Here’s what we’ll do:

1.) Grab the log from this directory: C:\ProgramData\Tableau\Tableau Server\data\tabsvc\vizqlserver\Logs and follow (or parse) this log: backgrounder*.txt

NOTE: If you have a Log Analytics strategy, you can easily follow this log file and leverage some of the vendor’s command line tools to make this process entirely automatic.

2.) Wrap your code in a PowerShell function which allows you to enter your TWB/TDS name.

Stop-TsExtract -ExtractName '<some name>'
bye-bye-ts-3
The result of stopping the process.

3.) The file in #1 is JSON so it’s super easy to parse and dig out the ‘PID’ which, believe it or not, corresponds to the process on your Tableau Server box. We’re going to look for this key/value pair:

k="ds-parser-connect-extract" AND v.caption=<name of your data source/workbook>

NOTE: You could also look for the session via “k=lock-session” (which you then have to correlate from the other backgrounder log file) but this next value gives you the ability to grab (and enter) the data source/workbook name.

4.) Now, if you've set up PowerShell remoting or SSH, you remote into your Tableau Server (via Invoke-Command) and enter the following (where $procID is the backgrounder process ID):

gps -Id $procID | kill -Verbose
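Putting the four steps together, a minimal sketch of the function might look like this; the k/v field names come from the steps above, while the server name, default log path, and JSON layout are assumptions you'll want to verify against your own backgrounder logs:

function Stop-TsExtract {
    param(
        [Parameter(Mandatory)][string]$ExtractName,
        [string]$LogPath = 'C:\ProgramData\Tableau\Tableau Server\data\tabsvc\vizqlserver\Logs',
        [string]$TsServer = 'yourTableauServer'
    )

    # Follow the newest backgrounder log and parse each JSON line
    $log = Get-ChildItem "$LogPath\backgrounder*.txt" | Sort-Object LastWriteTime -Descending | Select-Object -First 1
    $entry = Get-Content $log.FullName |
        ForEach-Object { $_ | ConvertFrom-Json } |
        Where-Object { $_.k -eq 'ds-parser-connect-extract' -and $_.v.caption -eq $ExtractName } |
        Select-Object -Last 1

    if (-not $entry) { Write-Warning "No running extract found for '$ExtractName'"; return }

    # Kill the backgrounder process that owns the extract (assumes PowerShell remoting is enabled)
    Invoke-Command -ComputerName $TsServer -ScriptBlock {
        param($procID)
        Get-Process -Id $procID | Stop-Process -Verbose
    } -ArgumentList $entry.pid
}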

 

BONUS

If you remember, I posted about refreshing your Tableau extracts via Slack here. Well, the next step (if you want to open this up outside the admin group) is to let users drop the name of a data source/workbook in a Slack channel. That way, any background jobs they accidentally start can be stopped.

Oh, and I should mention: test, test, test. This isn't supported by Tableau, but people believe it should be :) Make your vote count.

Tableau Server Performance fun with Slack and Logentries

The Beginning

It started as a joke. Then it became a pretty good idea.

You see, Tableau Server admins often get a lot of flak. We *constantly* get comments like:

  • Tableau is slow
  • This workbook is slow
  • Tableau can’t do this because someone I know said as much
  • I like <insert silly dashboard tool here> more because it does <something I can’t remember right now but I’m just trying to sound smart>
  • You’re just an analyst so you don’t know how to optimize data
  • and more!

Let’s be honest, the above comments can, 99% of the time, be tied to someone who designed something incorrectly but wasn’t aware of the implications of said design. Until now 🙂

And, in the natural way a Slack conversation allows, the comment dropped: 'It's like a dumpster fire.'

Inspiration!

It goes like this:

  • Slow TWBs trigger alerts all the time on our platform (Server admins should know these bottlenecks already)
  • We pull log data (yes, you can also pull from Postgres but logs are so rich) for those queries via Logentries
  • We parse the log data and convert the unruly string data into something usable (thank you PowerShell and, specifically, ConvertFrom-String)
  • At an interval of our choosing, we drop the results in Slack (only our team) with a mixture of funny GIFs (because levity is a good thing)
  • We analyze and reach out to the workbook owners for learning opportunities & improvement

Details

ts-slow-content-2
Monitoring the 90th percentile of workbook load time

 

This is the trigger and the cause of much confusion around what Tableau can actually do. You see, if performance concerns aren't addressed, every Server admin is going to get the 'Tableau is slow' argument. At that point, you're left defending the platform you set up. But the questions and concerns should really be about what is *causing* Tableau to appear slow.

We address the performance concern with a solid Log Analytics strategy. The above image is one of many examples of the alerts we'll get. This time, we're going to leverage the Logentries CLI to automatically pull this info out. Yes, automatically.

Here’s what we’ll use:

lecli query -n vizql -q slowwb -f $start -t $end | Out-File -FilePath $workpath\slowwb.txt -Force

The start and end variables are timestamps; we usually do a rolling 24 hours.
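For example, a minimal sketch of the rolling 24-hour window, assuming lecli's -f and -t flags accept Unix epoch seconds (check your lecli version if it expects another format):

# Rolling 24-hour window as Unix epoch seconds
$end   = [DateTimeOffset]::UtcNow.ToUnixTimeSeconds()
$start = $end - 86400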

ts-slow-content-0
The output of the Logentries query. Not optimal for parsing/sorting, etc.

If you haven't explored PowerShell's ConvertFrom-String cmdlet, you're missing out. It's pretty remarkable what it can do with a template file and some string data. And it's based on some solid (and profound) research.

ts-slow-content-1
Example template file for ConvertFrom-String. We’re implicitly defining some structure and letting the cmdlet work its magic

After you have (1) pulled the log data and (2) set up your template file, run this:

ConvertFrom-String -InputObject $((gc $workpath\slowwbclean.txt)|Out-String) -TemplateFile $workpath\template_slowwb.txt | select Name,Value

Once you do that, you get a beautiful PowerShell object for which the possibilities are endless (well, as much as you want them to be).

So that string data from above is now easily manageable and prepared for a Slack channel drop.

ts-slow-content-5
A real PowerShell object
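The Slack drop itself can be as simple as an incoming webhook. Here's a minimal sketch, assuming a webhook URL for the admin-only channel and that $slowWb holds the object returned by ConvertFrom-String:

# Post the parsed results to Slack via an incoming webhook (URL is a placeholder)
$payload = @{ text = "Daily 90th percentile load times:`n$($slowWb | Out-String)" } | ConvertTo-Json
Invoke-RestMethod -Uri 'https://hooks.slack.com/services/your/webhook/url' -Method Post -Body $payload -ContentType 'application/json'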

Enter Levity

Here's what the daily GIF might look like in our Slack channel. No one sees this but us, and the 'movement' of it really forces us to pay attention. We're (as Server admins) responsible for maintaining the platform and teaching others how to best use it. If we ignore it, the tool (Tableau) gets misused and decision makers may start to see if the grass is greener.

Again, and I can’t stress this enough, levity is a good thing. I can’t tell you how many times I’ve seen C-level people view workbooks that take 3+ minutes to load. Three minutes of awkward silence. That. Scares. Me.

So we monitor, and when it gets bad and we're being questioned about the slowness, we need to laugh. That's healthy, and I can't be convinced otherwise. If your intentions are wrong, you're not being human.

This:

slowWB
It moves.

Becomes this:

 

ts-slow-content-4
Daily performance metrics

That’s it! Pretty easy, right? So what else can we do with this?

Well, here are a few ideas, based on the magic of GIFs.

And remember, be responsible and kind. There are too many rude people on this planet. One more isn’t a good thing.

(1) Slow extracts become:

wrap_it_up

(2) Slow login times on Apache become:

tumblr_lil0h0CeAx1qavy44o1_500

(3) Large Server downloads become:

file.gif

Tableau Conference 2016: Server Admins

Make sure you come to the Tableau Server Admin User Group meeting at the conference. I’ll be speaking!

http://tc16.tableau.com/learn/sessions/3640

Here is the abstract:

While there are numerous and exceptional benefits to administering Tableau Server via the GUI, the hidden gem is its capability for automation and integration. In simple terms, automating as much of the administration and monitoring as possible makes for a very happy Tableau user base. In this session, you'll learn how having at least some automation can make your environment faster and leaner.

We'll automate everything from user provisioning (and removal) to auditing views, securing content, and 'Garbage Collection' (just removing old content). Want integration too?! We'll show you how to reach pretty much anything via the REST API and trigger extracts via tools like Slack.

Oh, one more thing. We'll show a new platform called Tableau Working Wax: the ability to automatically generate reports and deliver them to the people.

In the end, you’ll be on your way to a fully automated and healthy Tableau Server infrastructure.

See you at #data16

-Mike

 

Tableau Server: Get Bytes

Continuing the Log Analytics theme for Tableau Server, specifically the 'Monitor' pillar of A.I.M., it's time to show another quick tip for analyzing your Apache logs. Understanding how much data is moving over the wire is another technique you can use to decide whether dashboard, CSV, or general crosstab downloads need to be optimized. I shouldn't need to mention that it's also a great way to monitor the security and integrity of your data.

apache-bytes-0
The 'Alert' portion. If something is outside the norm, we're alerted.

We'll use the module I loaded here to do the heavy lifting. Once you export a csv, load it into Tableau for some analysis (or perhaps mash it up with the geo dashboard I demoed a while back).

You'll want to pass this query for the 'leFilter' section: '/HTTP\/1\.1" "-" \d{3}\s(?P<size>\d*)/ AND size>1000000'
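If you're using the Get-TsLog function from the module (covered in the Easy Parser post below), a hedged example might look like this; the Apache-flavored parameter names here are assumptions modeled on the VizQL example, so check Get-Help Get-TsLog for the real ones:

# Parameter names are hypothetical -- verify with Get-Help Get-TsLog
Get-TsLog -leAcctKeyApache 'your Apache log key' -leFilterApache '/HTTP\/1\.1" "-" \d{3}\s(?P<size>\d*)/ AND size>1000000' -workpathApache "C:\users\$env:username\Documents" -apikey 'your Logentries API key'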

The idea is that, over time, you'll be able to see which requests move the most bytes and, potentially, find a better way to optimize your content. Further, it's a fantastic way to be alerted (the 'Alert' portion of A.I.M.) on large, consistent GET requests from questionable sources.

In the end, this all fits together with time-to-serve data and other performance and security related inputs.

 

Tableau Server Log Analytics: Easy Parser

I've mentioned it before, but it's worth mentioning again: Log Analytics and Tableau Server are a wonderful combination. There's a ton of helpful information in the Tableau logs (and pretty much *all* logs) which, along with the PostgreSQL data, makes for a very good data toolbox.

I’ve also mentioned Logentries a lot when digging through Tableau logs. There are many reasons I use the tool, but the one which makes it the most useful is: centralized log analysis. Essentially the workflow goes like this: Tableau –> Logs –> Logentries –> Tableau (and around and around). It’s a positive feedback loop of valuable data which can give you insight into things such as:

  • Which workbooks take the longest to load and *where* they are viewed geographically
  • Which users download the most data, how much, and how long it takes
  • HTTP 404s
  • Filters used by a user/workbook
  • Data sources used by a user/workbook
  • and more!

With Tableau, you're either leveraging a Log Analytics strategy or you're not. I cannot stress enough how vital it is for Tableau Server administrators to have at least some plan in place for when you are suddenly inundated with a 'slow' server and/or site.

That said, it's often easier to have a few functions and tools to make ad hoc or automated analysis easier. Here's one: we'll wrap the Logentries REST API in a PowerShell function. This simply allows us to pull log data from Apache or VizQL based on a simple parameter.

What's returned is a neatly formatted csv which you can then import into Tableau, add to a database, or simply use for some quick research. For example, if you want to ensure excessive 404s are handled, you can use this function with a filter, parse the output, and look up the offending IP addresses. If necessary, you'd add those IPs to a firewall rule.

More specifically, here’s an example of how you would use the function in PowerShell:

Get-TsLog -leAcctKeyVizQL 'your VizQL key' -leFilterVizQL 'k=end-update-sheet' -workpathVizQL "C:\users\$env:username\Documents" -apikey 'your Logentries API key'
vapor_rub_0
Here’s where your log data (parsed) can become a great means to improve performance and react before things happen
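Coming back to the 404 example from above, a minimal sketch of that lookup might look like this, assuming the exported csv has 'status' and 'remote_ip' columns (the column names will depend on how your filter parses the Apache log):

# Group 404s by source IP and surface the worst offenders (column names are assumptions)
Import-Csv "$workpath\apache.csv" |
    Where-Object { $_.status -eq '404' } |
    Group-Object remote_ip |
    Sort-Object Count -Descending |
    Select-Object Count, Name -First 20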

The added benefit of putting this type of aggregated data into your own Tableau data model and database is that it gives the admin data for historical purposes, planning, and learning.

So, here’s the module on the PowerShell gallery. Let me know if there are questions.

 

Slack your Tableau Extract – Part II

Ever wish you could drop the name of your Tableau workbook and/or data source into a Slack channel and have it automatically refresh? What if you're developing and need to get a quick refresh completed? What if you don't have tabcmd installed on your machine? What if you want to add a step at the end of your pipeline that drops the name of the content into the Slack channel?

ts-refresh-extract-0
Example of name of Tableau content

I've talked about this before, but what had to happen was that the extract needed to already exist in the 'background_jobs' table. Well, that won't always be the case, as people will be doing this for the first time. So, we needed to expand it a bit to include *all* possibilities (workbooks and data sources). Also, in this much-improved version, we Slack back to the user to let them know their extract is complete.

ts-refresh-extract-2
Process for *each* extract (all dynamic) 

That’s the beauty of the ‘Tableau-Slack-Logentries‘ integration. When you have a decent amount of parts, the whole becomes a fascinating thing.

Here are the steps:

  • get the data from the Logentries webhook
  • Process the data for each extract
  • getting current slack users: don’t need to do this often (unless you want to)
  • getting valid list of workbooks and data sources
  • Processing list of extracts : rolling 24 hours
  • getting valid list of workbook / data source owners
  • create Slack content object: basically it must add up to a certain number to run (for example, if the person who dropped the name in the channel isn’t the owner, it won’t succeed).
  • Log it!
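Here's a minimal sketch of that ownership check, using hypothetical objects: $request holds the Slack user and content name from the Logentries webhook, and $content is the valid workbook/data source list pulled from Tableau Server:

# All of the pieces have to line up before the refresh is kicked off
$match = $content | Where-Object { $_.Name -eq $request.ContentName }

if ($match -and $match.OwnerEmail -eq $request.SlackUserEmail) {
    # Everything adds up: run the refresh, then Slack the user when it's complete
    Write-Verbose "Refreshing '$($match.Name)' for $($request.SlackUserEmail)"
}
else {
    Write-Warning "Request rejected: '$($request.ContentName)' not found or requester isn't the owner."
}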

 

ts-refresh-extract-3
Tableau Server example of completed extract. 
ts-refresh-extract-1
Corresponding message back from Tableau Server

If anyone is interested in the code and / or a demo, please let me know and I’ll be happy to show it.

Automatically remove (and archive) extracts that fail more than ‘n’ times

Keep it clean

Every now and then, Tableau extracts aren’t refreshed properly. That, in and of itself, isn’t a bad thing. What we’re worried about are the extracts that continue to fail day after day. They consume resources and, most importantly, don’t give a true picture of the data.

ts-extract-fails-0
Continual extract failures

Here's a simple script that queries Postgres and grabs the extracts that have failed n times (you can choose your threshold). At a very high level, it does the following:

  • Get the list of failed extracts (the most recent fail date should be the current date); see the query sketch below
  • Limit the list to only those above your desired threshold
  • Use the REST API to pull the extracts from Tableau Server
  • Do a diff on the list and the downloaded files and act only on those which match
  • Archive the twb/twbx/tds/tdsx so users can refer to them later
  • Delete the content from Tableau Server
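For the first two bullets, here's a sketch of the query text against the 'workgroup' Postgres database; the column names and finish_code value are assumptions based on the background_jobs table, so verify them against your Server version:

# Failure threshold ('n') -- adjust to taste
$threshold = 5

# finish_code = 1 is assumed to mean a failed job; verify on your version
$query = @"
SELECT   title, MAX(completed_at) AS last_fail, COUNT(*) AS failures
FROM     background_jobs
WHERE    job_name = 'Refresh Extracts'
AND      finish_code = 1
GROUP BY title
HAVING   COUNT(*) >= $threshold;
"@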
ts-extract-fails-1
Here is how we dynamically switch from site to site *and* from content resource (data source or workbook) with the REST API

Taking it a step further

If you have a Log Analytics strategy in place, you can send the output of the script to a file and have the log agent follow it. This will give you and the rest of the Tableau admin team some insight into what is failing (beyond what I've talked about before).
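For example, a minimal sketch of that hand-off, assuming $removedContent holds the script's output and a hypothetical log path that your agent is configured to follow:

# Append the removal summary to a file the log agent follows (path is an assumption)
$removedContent | ConvertTo-Json -Compress | Out-File -FilePath 'C:\TsAdmin\Logs\extract-cleanup.log' -Append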

You can also integrate with Slack to notify the user/owner that his/her workbook will be removed until the failure is fixed.