
How to Establish Simple Server Monitoring via PowerShell


Visibility into your infrastructure is essential, both for optimizing performance and for preventing issues from cropping up. Real-time and even periodic monitoring are powerful practices. Naturally, there are numerous ways to go about it, and a multitude of tools to choose from.

While the sheer number of choices seems like the initial problem, the true challenge is customization. Every team has different preferences. Server architectures and configurations differ greatly. Unfortunately, for some use cases, even the best tools must make assumptions, for better or worse. Such rigid approaches might actually contribute to increased downtime, undermine performance, and weaken security.

 

Companies running Windows-based servers have an excellent option at their disposal: PowerShell. The powerful scripting language is highly flexible. Because PowerShell relies on human-readable syntax, it's relatively easy to get started. What's the best part of writing your own code? You can design solutions around any parameters you choose. That's not to say PowerShell scripting has no learning curve. However, the potential for substantial cost savings is high. You also won't be held hostage by a third party's support availability, should problems arise (as they always do).

Establishing monitoring depends on knowing your priorities, as well as the basic mechanisms behind monitoring. We'll dive into those first.

 

Monitoring and PowerShell Essentials

First and foremost, you'll need to figure out what you want to monitor before getting started. Many a company has put the cart before the horse, and the resulting monitoring solutions have turned out subpar. Multiple factors may be vying for your attention:
  • Performance
  • Process utilization
  • Memory and CPU usage
  • Network and user activity
  • Server response times
  • Thread utilization
  • Bandwidth and throughput

This list isn’t exhaustive, but it should be a decent starting point. Reaching a consensus on focus areas is essential, even when creating simple monitors. Say we want to stray from heavy metrics, however: full telemetry data is useful, yet it might be too detailed for your purposes. For basic resource checks, a couple of performance counters may be all you need, as in the sketch below.
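If basic resource metrics are all you need, a quick way to sample them is the built-in Get-Counter cmdlet. The following is a minimal sketch; the counter paths are the standard English names and may need adjusting on localized systems.

# Sample CPU load and available memory once (use -SampleInterval and -MaxSamples to average over time)
$cpu = (Get-Counter '\Processor(_Total)\% Processor Time').CounterSamples.CookedValue
$mem = (Get-Counter '\Memory\Available MBytes').CounterSamples.CookedValue

[pscustomobject]@{
    TimeStamp         = Get-Date
    CpuPercent        = [math]::Round($cpu, 1)
    AvailableMemoryMB = $mem
}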

 

Keeping Things Simple

For example, we know a server at its most basic is either active or inactive. Performing that status check is fundamental to maintaining your ecosystem. Because requests to the server travel over the network, HTTP responses from certain endpoints are expected. If a host stops answering or a port goes offline, investigation is likely necessary.
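As a minimal sketch of that kind of check, the standard Test-Connection and Test-NetConnection cmdlets can confirm that a host answers at all and that a given port is listening. The server name and port below are placeholders.

# Placeholder host name and port; replace with your own server
$server = 'web01.example.com'

$ping = Test-Connection -ComputerName $server -Count 1 -Quiet
$port = Test-NetConnection -ComputerName $server -Port 443 -InformationLevel Quiet

if (-not ($ping -and $port)) {
    Write-Warning "$server is not responding as expected; investigate."
}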

You must also create a basic chain of escalation: which issues, at what severity, will grab whose attention? A PowerShell script triggers a pre-determined check, and the outcome it reports is often called a state. If a bad state is discovered, such as an incorrect response, someone must be notified.

This might be someone on the IT side, DevOps side, or even technical higher-ups. One person might be enough to mitigate the issue. Multiple employees might have to extinguish a fire. Prompt alerting via PowerShell can help resolve hiccups faster.
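How that notification happens is up to you. As one possible approach, the sketch below wraps the standard Send-MailMessage cmdlet in a small helper; the addresses and SMTP host are placeholders, and a chat webhook or ticketing system would serve just as well.

# Hypothetical alert helper; addresses and SMTP server are placeholders
function Send-MonitorAlert {
    param (
        [Parameter(Mandatory)][string]$Subject,
        [Parameter(Mandatory)][string]$Body
    )

    Send-MailMessage -From 'monitoring@example.com' `
                     -To 'it-oncall@example.com' `
                     -Subject $Subject `
                     -Body $Body `
                     -SmtpServer 'smtp.example.com'
}

# Example: raise an alert when a check reports a bad state
Send-MonitorAlert -Subject 'ALERT: web01 unreachable' -Body 'The HTTP check failed three times in a row.'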

 


Writing Your PowerShell Monitors

Your monitors will be contained within a PowerShell script: a single file composed of multiple functions. You'll define actions, good states, bad states, and outputs within those functions.

You'll also include your desired trigger frequency. A PowerShell script only runs once by default. You can get around this minor snag either through PowerShell's looping functionality or by creating a task that calls the script as often as you wish. The language's organized, variable-based syntax gives us the flexibility to define this freely.
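As a simple illustration of the looping approach, the sketch below re-runs a check at a fixed interval. Test-HTTP is the hypothetical monitoring function built later in this post, and the interval and server name are placeholders; a scheduled task (shown near the end) is the other common trigger.

# Re-run the monitors every five minutes until the session is closed
while ($true) {
    # Hypothetical function defined in your monitoring script
    Test-HTTP -ComputerName 'web01.example.com' -ExpectedStatusCode 200

    Start-Sleep -Seconds 300
}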

Any scripting file you create for PowerShell is saved with the .ps1 source code extension. This might appear as

C:\ServerMonitors.ps1

within your system, though you can name the file whatever you'd like.
 

Building Blocks

Next, you'll want to know your verbiage. Managing services can introduce a host of functions into the equation. However, we'll want to focus on two key cmdlet verbs:
  • Get
  • Test

Each verb is paired with a noun that provides context for the request, connected by a dash, e.g. Get-Service; parameters such as -ComputerName then narrow the request to a specific machine. If you want to test your server's functionality, you may elect to write your own command along the same lines, such as Test-HTTP. These custom commands are declared with the function keyword, so that one's first line may look like this:

function Test-HTTP {

Functions also include parameters. If you don’t define parameters for your servers, then your PowerShell scripts won’t discern between different outputs. These general functions will return general results. Properly defining parameters (like computer names, good responses, or bad responses) will ensure your monitoring output is contextually informative.

If you want to define a positive response, such as an HTTP status code, you’ll have to denote this as an integer ([int]). Such codes might be 200 (successful, OK response), 202 (accepted, but waiting), or anything else related to server functionality. Because servers ultimately respond to API requests, it’s useful to design a monitoring function that confirms healthy responses. You’ll also need to include strings ([string]) to denote key monitoring objects or data types.

[Figure: What a basic HTTP response test function might look like. Courtesy of Adam the Automator.]
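Since the screenshot isn't reproduced here, the following is a rough sketch of how such a function could be written. The parameter names, the default status code, and the plain-HTTP URL are illustrative choices, not a prescribed implementation.

function Test-HTTP {
    param (
        # Server to query and the status code we consider healthy (illustrative defaults)
        [Parameter(Mandatory)][string]$ComputerName,
        [int]$ExpectedStatusCode = 200
    )

    try {
        # -UseBasicParsing keeps this compatible with Windows PowerShell 5.1
        $response = Invoke-WebRequest -Uri "http://$ComputerName" -UseBasicParsing -TimeoutSec 10
        $actual   = $response.StatusCode
    }
    catch {
        # Any error, including 4xx/5xx responses, is treated as an unhealthy result here
        $actual = $null
    }

    # Return expected vs. actual so the caller can decide what to do next
    [pscustomobject]@{
        ComputerName = $ComputerName
        Expected     = $ExpectedStatusCode
        Actual       = $actual
        Healthy      = ($actual -eq $ExpectedStatusCode)
    }
}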

You'll also want to bake validation and conditional statements into your PowerShell scripts. These instruct the script to produce certain outputs depending on whether a status comes back true or false. PowerShell follows these instructions accordingly and returns a meaningful result to you or your team.
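Tying the earlier sketches together, a conditional in the calling code might react to the state like this (Test-HTTP and Send-MonitorAlert are the hypothetical functions sketched above):

$result = Test-HTTP -ComputerName 'web01.example.com' -ExpectedStatusCode 200

if ($result.Healthy) {
    Write-Output "web01 is healthy (HTTP $($result.Actual))."
}
else {
    # Bad state: escalate to whoever owns this server
    Send-MonitorAlert -Subject 'ALERT: web01 returned a bad state' `
                      -Body   "Expected HTTP $($result.Expected), got '$($result.Actual)'."
}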

 


Kicking off Your PowerShell Monitoring

Building your monitoring functions is step one. However, you'll also have to write a mechanism for triggering these monitoring scripts. The "invocation" script activates your relevant pre-made functions, while defining any supplemental tasks pegged for completion. Its name can follow the same scheme as the monitoring script:

C:\CallServerMonitors.ps1

Structure is incredibly important here. You'll be organizing your various monitors into hash tables: compact data structures made up of key and value pairs. In them you might define your server names and monitor names. The invocation script can also trigger alerts based on the results that come back, which lets you apply specific rules as desired.
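A rough sketch of what such an invocation script could contain follows. The dot-sourced path, server names, monitor names, and alerting rule are all illustrative.

# Load the monitoring functions defined earlier (path is illustrative)
. 'C:\ServerMonitors.ps1'

# One hash table per monitor: key/value pairs describing what to check and what "good" means
$monitors = @(
    @{ Name = 'Intranet HTTP check'; ComputerName = 'web01.example.com'; Expected = 200 }
    @{ Name = 'API HTTP check';      ComputerName = 'api01.example.com'; Expected = 200 }
)

foreach ($monitor in $monitors) {
    $result = Test-HTTP -ComputerName $monitor.ComputerName -ExpectedStatusCode $monitor.Expected

    if (-not $result.Healthy) {
        # Example rule: anything unhealthy goes straight to the on-call mailbox
        Send-MonitorAlert -Subject "ALERT: $($monitor.Name) failed" `
                          -Body   "Expected HTTP $($monitor.Expected), got '$($result.Actual)'."
    }
}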

 


Frequencies and Final Considerations

Your external tasks will activate these core scripts as often as you wish. While it’s useful to monitor some server parameters more often, others are decidedly less mission-critical. Microsoft considers these monitoring scripts to be idiomatic, since they’re highly dependent on cmdlets and functions.

Running large numbers of PowerShell monitoring scripts simultaneously might incur a resource cost, since they work through the pipeline rather than calling .NET directly. It may be best to stagger the scripts that need to run often from those that aren't critical. Thankfully, the inherently lightweight nature of your functions will help mitigate those concerns.
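One way to stagger frequencies is to register separate Windows scheduled tasks for the critical and non-critical invocation scripts. The sketch below uses the ScheduledTasks module that ships with modern Windows Server; the path, task name, and five-minute interval are illustrative, and registering a task typically requires elevated rights.

# Critical checks every five minutes; a second task with a longer interval can cover the rest
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
           -Argument '-NoProfile -File C:\CallServerMonitors.ps1'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
           -RepetitionInterval (New-TimeSpan -Minutes 5)

Register-ScheduledTask -TaskName 'ServerMonitors-Critical' -Action $action -Trigger $trigger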

These examples aren’t exhaustive. There are many more monitoring avenues—you can count on PowerShell to have your back when it comes to server visibility.

 
Webinar: PRTG & ScriptRunner – Monitoring & Automation at its best

 

PRTG and ScriptRunner – Monitoring and Automation at its best

Find out how PRTG Network Monitor and ScriptRunner work hand in hand to fulfil all your IT infrastructure monitoring needs!
 
 
 
 
