Setting up Turbonomic Action Notifications to Slack Channels

An interesting use-case that I’ve bumped into lately is where folks want to enable automation, but they also need to know when automated things happen. Email was, and still is, the common platform for notifications, but many more organizations are adopting Slack for day-to-day activity monitoring and building out interesting, interactive ways to enable the ChatOps approach to IT operations management.

If you followed along with my first article, which showed you how to set up a custom WebHook integration for your Slack team channel, we will now take that one step further and show you how to configure Turbonomic to send notifications of actions to your Slack channel.

Setting up Action Scripts in Turbonomic

One of the cool features within Turbonomic is something called Action Scripts. These are scripts that run when a particular action happens on a particular entity within the environment. Action Scripts run at different points in the process, including before (PRE) and after (POST) the action, so that you can either get a notification or trigger some interaction with the action.

Action Scripts run for every action type available, including moves, scale/resize, and more. The naming of each Action Script reflects the timing (PRE/POST) and the action type. You only need to create one Action Script, which is hosted on your Turbonomic control instance and launched by the Turbonomic engine as actions are triggered.

The official documentation on using Action Scripts is here, but for our purposes here I will give you a crash course in creating a PRE move script so that we can send Slack notifications when an application workload is about to move.

Variables Accessible during Action Script Execution

There are a number of environment variables which are generated when a Turbonomic action is instantiated. Some of these include:

$VMT_TARGET_NAME – the entity which is subject to the move action
$VMT_CURRENT_NAME – the source location where the entity is located
$VMT_NEW_NAME – the destination where the entity will be moved
$VMT_ACTION_NAME – the unique ID for the action

These are the ones I’ve chosen to include in my Slack notifications because I want to know the workload that is subject to the move, the source location, and the target location; having the ID of the action is also helpful for auditing and for deeper integration with a true ChatOps approach that we will dive into in another post.

For now, the Slack notifications will simply log to our Slack channel whenever moves occur. You can hook into any of the available action types with Action Scripts, so this is a good place to start.

The Script

The simplest view of the script is as follows. Create a file using the naming convention for the PRE move action; this is the script that gets called by a move action, which could be anything from a VM migration across hosts or clusters to container pod changes and more.

We need to take the action variables that we have been given and pass them into our Slack API call. The simplest method for this is to embed a cURL command in the Action Script, using the native cURL command available on your Turbonomic instance.

The command to post to the Slack API requires your WebHook URL, which you can get by following this guide to setting up the WebHook.

This is the full GitHub Gist of the code. If you have existing Action Scripts in the folder, you can simply append these lines to your existing script.

Take note of the use of quotes on the command line: we need to pass the variables into the cURL command, which requires additional double quotes around the entire payload.
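To make those pieces concrete, here is a minimal sketch of what such a script could contain. This is a sketch rather than the exact Gist: the webhook URL is a placeholder, the variable defaults exist only so the sketch can be exercised outside of Turbonomic, and the live cURL call is left commented out.

```shell
#!/bin/bash
# Placeholder webhook URL -- substitute the one generated for your Slack team.
WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"

# Turbonomic exports these variables when the action fires; the defaults
# here are only so the script can be run standalone for testing.
TARGET="${VMT_TARGET_NAME:-unknown-entity}"
SOURCE="${VMT_CURRENT_NAME:-unknown-source}"
DEST="${VMT_NEW_NAME:-unknown-destination}"
ACTION_ID="${VMT_ACTION_NAME:-unknown-action}"

# Note the escaped double quotes: the JSON payload needs its own quoting
# inside the double-quoted shell string.
PAYLOAD="{\"text\": \"Move pending: ${TARGET} from ${SOURCE} to ${DEST} (action ${ACTION_ID})\"}"
echo "$PAYLOAD"

# Uncomment to actually post to Slack:
# curl -X POST -H 'Content-type: application/json' --data "$PAYLOAD" "$WEBHOOK_URL"
```

Running the script by hand first is a handy way to confirm the quoting is right before wiring it into an action.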

Last step – Enable Action Script for Moves in Turbonomic

At the time of this writing, the Action Scripts feature is still in the traditional Flash UI. Go to the Policy view in your Turbonomic instance and expand the Action | VM section, where we will enable Action Scripts for Virtual Machines in this case.

Simply check the Action Script Settings box for the PreMove action and you are all set. In the image above you can see that I also have Move actions automated, which may be set to Manual in your environment.

NOTE: Enabling policy changes within Turbonomic will trigger a refresh of the actions. This is because the state of your policies has changed and the entities in the environment must shop for the appropriate resources to satisfy their demand based on the newly formed policy configuration. This is the nature of the system being real-time so that no actions are held when they could be stale or unnecessary due to other environmental changes that have occurred.

The Slack View

Under your Slack channel, you will now begin seeing notifications whenever an action occurs. This is what your channel will start to look like as the moves take place:

In my case, I have enabled full automation; this means that these actions are triggered and the notification happens as the action is about to occur. We can also use a POST_MOVE script, which is handy if we are building out other hooks.

The goal with Action Scripts is to be able to integrate with any application lifecycle management process or product. Look for much more in the coming weeks as we walk through some more integrations that can be done with this method.

Why Google Needs Consistency for Enterprise Cloud Customers

Remember Google Buzz? Orkut? Wave? Reader? Google Talk? Then there was Google Picasa…which became Photos…so far. There are sites dedicated to what we call the Google Graveyard. This doesn’t even get into Google Glass, Site Search, the Search Appliance, and others. I logged into my Google Analytics platform today and found a completely different UI and UX than I had ever seen before…without warning. I used to use Google Hangouts On Air for the Virtual Design Master event every year, until this year when HOA stopped working, so I have had to move to using Zoom and pushing to a YouTube Live Event.

The reason I bring these up is that Google has an optics problem, which may affect how many potential enterprise cloud customers choose to adopt, or rather not to adopt, Google Cloud Platform. One of the big things that traditional enterprise customers enjoy is the warm embrace of platforms that have consistency. Google has tended to have challenges around product changes and the public face of those changes. Google most likely has lots of data backing each decision to shift or sunset a product.

Can GCP make Enterprises Greene with Envy?

Diane Greene came to Google by way of the acquisition of her most recent startup, Bebop. It’s my opinion that the startup was the packaging in which they could acquire the real value, which is Diane herself. Diane has proven past success in growing a little virtualization concept into the juggernaut that became VMware. The most recent Google Cloud Next event featured a strong new focus on the enterprise, with an aim to become the number 1 public cloud provider within five years.

A quote that stood out from the event was “I actually think we have a huge advantage in our data centers, in our infrastructure, availability, security and how we automate things. We just haven’t packaged it up perfectly yet.” This highlights the challenge Google will face: what many enterprises need is a packaged, neatly consumable product that they know they can adopt and maintain, with long support plans and clean deprecation.

There is little doubt of the ability of Google to develop incredible products which will give birth to next-generation application infrastructure that few can rival. The only doubt comes around whether enterprise audiences are going to be ready to adapt to the speed at which Google innovates their product set. If Kubernetes is any sign of how well we are leaning in, then it is very easy to see that Google can take the market on and win a significant share.

Google Cloud Platform will be a juggernaut in the public cloud realm. That is being proven out by some major customers already moving onto the platform and many more dabbling. Multi-cloud is the new cloud, so GCP will inevitably become a key player in that strategy because of its underlying GKE product to support Kubernetes workloads. In my opinion, the multi-cloud approach enabled by containerized workloads with an enterprise-grade scheduler is the goal we should strive for.

The only question is how long it will take before we can all put our trust in one product that Google has lacked in, which is consistency.

Got Logs? Get a PaperTrail: First thoughts

I stumbled upon Papertrail through a Twitter Ad (hey, those things work sometimes!) and figured that I should take a quick look. Given the amount of work I’ve been doing around compliance management and deployment of distributed systems, this seems like it may be an interesting fit. Luckily, they have a free tier as well which means it’s easy to kick the tires on it before diving in with a paid commitment.

The concept seems fairly easy:

The signup process was pretty seamless. I went to the pricing page to see what the plan levels are which also has the Free Plan – Sign Up button nicely planted center of screen:

What I really like about this product is the potential to price by data ingestion rather than by endpoints. Scalability is a pricing concern for me, so knowing that the amount of aggregate data drives the price is rather comforting.

The free tier gets you a first month with lots of data, followed by a 100 MB per month limit. That’s not too difficult to cap out, so you can easily see that people will be drawn to the $7 first paid tier, which ups the data to 1 GB of storage and 1 year of retention. Clearly, at 7 days of retention for the free tier, this is meant to just give you a taste and leave you looking for more if the usability works for you.

First Steps and the User Experience

On completion of the first form, there is a confirmation email. You are also logged in immediately and ready to roll with the simple welcome screen:

Clicking the button to get started brings you to the instruction screen, complete with my favorite (read: most despised) method of deploying: pushing a script into a sudo bash pipe.

There is an option to run each script component individually, which is much preferred so you can see the details of what is happening.
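For anyone unfamiliar with why the pipe is worth avoiding, the safer pattern is sketched below. The installer URL is a placeholder; use the one Papertrail’s setup page actually gives you.

```shell
# Placeholder URL -- substitute the installer URL from Papertrail's setup page.
INSTALLER_URL="https://example.com/papertrail-setup.sh"

# Instead of:  curl -sSL "$INSTALLER_URL" | sudo bash
# download the script to disk first...
#   curl -fsSL "$INSTALLER_URL" -o setup.sh
# ...review what it will do with root privileges...
#   less setup.sh
# ...and only then run it deliberately:
#   sudo bash setup.sh
echo "Download, review, then run: $INSTALLER_URL"
```

Piping straight into sudo bash skips the review step entirely, which is the whole objection.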

Once you’ve done the initial setup process, you get a quick response showing you have active events being logged:

Basic system logging is one thing, so the next logical step is to up the game a bit and add some application-level logging, which is done using the remote_syslog2 collector. The docs and deployment process are available inside the Papertrail site as well:
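The collector reads a small YAML config that tells it which files to watch and where to ship them. A sketch follows; the host and port are placeholders (Papertrail assigns you your own), the config is written locally here for illustration, and on a real system it normally lives at /etc/log_files.yml.

```shell
# Sketch of a remote_syslog2 config -- host and port are placeholders;
# Papertrail assigns real values per account. Written to the current
# directory here; the daemon normally reads /etc/log_files.yml.
cat > log_files.yml <<'EOF'
files:
  - /var/log/apache2/error.log
destination:
  host: logs0.papertrailapp.com
  port: 12345
  protocol: tls
EOF
cat log_files.yml

# Then start the collector (requires the remote_syslog2 package installed):
# sudo remote_syslog -c /etc/log_files.yml
```

Adding more application logs later is just a matter of appending paths to the files list.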

Now that I’ve got both my system and an application (I’ve picked the Apache error log as a source location) working, I’m redirected to see the live results in my Events screen (mildly censored to protect the innocent):

You can highlight some specific events and drill down into the different context views by highlighting and clicking anywhere in the events screen:

Searching the logs is pretty simple with a search bar that uses simple structured search commands to look for content. Searches can be saved and stored for reporting and repeated use.

On the first pass, this looks like a great product, and it is especially worth thinking about as you look at how to aggregate logs for search and for retention for security and auditing.

The key will be making sure that you clearly define the firewall and VPC rules to ensure you have access to the remote server at Papertrail and then to make sure that you keep track of the data you need to retain. I’ve literally spent 15 minutes in the app and that was from first click to live viewing of system and application logs. All that and it’s free too.

There is a referral link which you can use here if you want to try it out.

Give it a try if you’re keen and let me know your experiences or other potential products that are freely available that could do the same thing. It’s always good to share our learnings with the community!

Setting up a Slack WebHook to Post Notifications to a Team Channel

If ChatOps is something you’ve been hearing a lot about, there is a reason. Slack is fast becoming the de facto standard in what we are calling ChatOps. Before we go all out making chatbots and such, the first cool use-case I explored is enabling notifications for different systems.

In order to send any notifications to Slack, you need to enable a WebHook. This is super easy, but it made sense for me to give you a quick example so that you can see the flow yourself.

Setting up the Slack Webhook

First, log in to your Slack team in the web interface. From there we can open the management view of the team to get to the apps and integrations. Choose Additional Options under the settings icon:

You can also get there by using the drop-down in the left-hand pane and selecting Apps and Integrations from the menu:

Next, click the Manage button in the upper right portion of the screen near the team name:

Select Custom Integrations and then from there click the Incoming WebHooks option:

Choose the channel you want to post to and then click the Add Incoming WebHooks Integration button:

It’s really just that easy! You will see a results page with a bunch of documentation such as showing your WebHook URL:

Other parts of the documentation also show you how to configure some customizations and even an example cURL command to show how to do a post using the new WebHook integration:

If you go out to a command line where you have the cURL command available, you can run the example command and you should see the results right in your Slack UI:
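If you would rather script that test than paste the sample by hand, a minimal sketch looks like this. The webhook URL is a placeholder; use the one from your results page. Incoming WebHooks accept a simple JSON payload with a text field.

```shell
# Placeholder webhook URL -- substitute the one Slack generated for you.
WEBHOOK_URL="https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXX"

# A minimal incoming-webhook payload: just a "text" field.
PAYLOAD='{"text": "Hello from my new incoming webhook!"}'
echo "$PAYLOAD"

# Uncomment to post for real:
# curl -X POST -H 'Content-type: application/json' --data "$PAYLOAD" "$WEBHOOK_URL"
```

A successful post returns a plain "ok" response and the message appears in your chosen channel.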

There are many other customization options such as which avatar to use, and the specifics of the command text and such. You can get at the WebHook any time under the Incoming WebHooks area within the Slack admin UI:

Now all you have to do is configure whatever script or function you have that you want to send notifications to Slack with and you are off to the races.

Top vBlog voting is underway

The sheer number of blogs listed over on Eric Siebert’s vLaunchpad is simply amazing! I don’t know how many are listed there, but there is certainly a lot of scrolling needed to get to the bottom. It’s awesome to see just how much information is being shared.


Top vBlog Voting 2017 – Supporting Community Bloggers

Every year we are seeing more and more community contributors in the blogging ecosystem. My own work here at and through my role at Turbonomic in the community has been so enjoyable to be a part of because of the support that I continue to receive from readers and peers in many tech communities.

Eric Siebert has been hosting the Top vBlog voting for years, and it has grown from a handful of participants to a veritable must-read list that covers every aspect of virtualization, networking, scripting, and more. This year I am honoured to be among the contributors listed and am also very proud to have Turbonomic sponsor the voting.

My blog is listed in the voting under my name (just search for DiscoPosse) and my podcast (GC ON-Demand) is also in the running for best podcast.

I would greatly appreciate a vote if you feel that I’m providing content that is valuable, and of course, please extend your votes to all of the great IT community who surrounds us all. For those who know the work that Angelo (@AngeloLuciani), Melissa (@vMiss33) and I do with Virtual Design Master, you will know that many of the participants are also in the voting.

Your support of our amazing blogger and podcast community is always appreciated.  Thank you!

Vote here for this year’s event: