How I accidentally replaced Maester and half of my weekend plans

Patrik Jonsson
November 12, 2025 · ~5 min read · 900 words
AI, Assessment

If you had told me a month ago that I'd build an assessment framework from scratch - one that would completely replace both Maester and Pester - I would have laughed, said "sure", and gone back to debugging my PowerShell script.

But here we are.

The need for a better way

I've spent a fair amount of time in the world of Entra ID assessments, compliance checks, and security baselines. You know - the type of work that starts as "just a few quick checks" and ends up with a 300-line PowerShell report, ten CSV exports and at least one coffee gone cold.

Maester is a fantastic tool, and I highly recommend it to anyone, but over time I started hitting its limits for my specific needs.

  • I wanted consistent structure and multi-language support.
  • I wanted to be less dependent on multiple modules.
  • I needed a tool that could be used in any kind of environment, cloud or on-premises.

I know, Maester and Pester can be used for this as well, and they are probably better tools in general, but I needed more control!

So I did what every rational person does when frustrated with an existing tool. I built a new one.

Why build my own engine?

At first, I wasn't setting out to reinvent the wheel - just maybe, you know, re-align it a bit.

But I gradually realized I wanted something that:

  • Anyone in the company could use - from PowerShell experts to people who just run the script and hope for the best.
  • Could produce branded, customer-ready reports without 37 lines of HTML concatenation.
  • Would make it easy to add tests, with clear metadata, language support, and validation built in.

The goal was simple.
Assessments should be easy to build, easy to run, and hard to break.

Collaboration - My unexpected AI Co-Developer

Here's where things got interesting.

Instead of spending nights writing function skeletons and debugging parameter bindings, I started talking to ChatGPT. Literally.

What began as "Can you fix this syntax error?" quickly turned into:

  • "Let’s redesign the folder structure."
  • "Let’s move test metadata to Json."
  • "Can you make the report prettier?"
  • "Okay fine, maybe a bit prettier."

It felt less like coding and more like pair programming with a highly caffeinated assistant who never sleeps, doesn't judge your variable names, and occasionally surprises you with "Have you considered a validation command for your test metadata?"

AI didn’t just speed up development — it gave me features I didn't even know I needed.

And somehow, between the two of us, we built something far beyond my initial idea.

From concept to reality

The architecture came together piece by piece:

1. Engine - handles test discovery, requirements validation, and result formatting.
2. Packs - modular sets of tests with categories (Entra ID: Authentication, Authorization, Conditional Access...)
3. Self-describing tests - each one powered by an object with IDs, categories and localized descriptions (see the sketch after this list).
4. Reports - clean, Bootstrap-based HTML templates with filters, accordions and company branding (because let's face it, good colors make bad news easier to read).
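
To make that third point concrete, here's a rough sketch of what a self-describing test could look like. This is not the engine's actual schema - the ID, the property names, and the JSON shape are assumptions for illustration only:

```powershell
# Hypothetical sketch - the real pack schema may differ.
# Each test is described by JSON metadata plus the logic that runs it.
$json = @'
{
  "Id": "ENTRA-CA-001",
  "Category": "Conditional Access",
  "Status": "Ready",
  "Requirements": {
    "Modules": ["Microsoft.Graph.Identity.SignIns"],
    "Scopes": ["Policy.Read.All"]
  },
  "Descriptions": {
    "en": "Legacy authentication should be blocked by a Conditional Access policy.",
    "sv": "Äldre autentisering bör blockeras av en princip för villkorsstyrd åtkomst."
  }
}
'@
$testMetadata = $json | ConvertFrom-Json

# Pick the description in the requested language, falling back to English.
$language = 'sv'
$description = $testMetadata.Descriptions.$language
if (-not $description) { $description = $testMetadata.Descriptions.en }
```

Keeping the metadata in JSON is what lets the engine validate it, and pick the right language for the report, before a single test runs.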

Then came all the small touches.

  • Test statuses: Ready, Preview, Deprecated - because not everything ages gracefully.
  • Validation logic to catch issues early.
  • Requirement checks for scopes, modules, or permissions - so the script can politely tell you why it refuses to run - or why some tests were skipped (a rough sketch follows this list).
  • Parallel execution for faster runs (because no one likes waiting for "Test 47 of 132").
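
Here's a minimal sketch of what such a requirement check might look like, assuming Microsoft Graph-based tests and the metadata shape from the earlier sketch. Test-Requirements and the property names are hypothetical, not the engine's real API:

```powershell
# Hypothetical sketch - function name and metadata shape are illustrative only.
# Assumes the Microsoft.Graph.Authentication module is loaded (for Get-MgContext).
function Test-Requirements {
    param([object] $Test)

    # Skip the test (rather than fail the whole run) if a required module is missing.
    foreach ($module in $Test.Requirements.Modules) {
        if (-not (Get-Module -ListAvailable -Name $module)) {
            return "Skipped: module '$module' is not installed."
        }
    }

    # Skip if the current Microsoft Graph connection lacks a required scope.
    $context = Get-MgContext
    foreach ($scope in $Test.Requirements.Scopes) {
        if (-not $context -or $context.Scopes -notcontains $scope) {
            return "Skipped: missing Graph scope '$scope'."
        }
    }

    return $null  # All requirements met - the test is allowed to run.
}
```

The parallel execution, for its part, could lean on something like PowerShell 7's ForEach-Object -Parallel.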

Each improvement built on the last, and soon the project started to feel like a living system - not just a pile of scripts.

The unexpected wins

Once the foundation was there, things started evolving fast.
And this is where I really saw the power of combining structure, creativity, and a hint of AI magic.

Some of the best ideas weren't even planned:

  • Team-based packs: Each team at our company can now create and maintain their own assessments independently.
  • Multi-language support: English and Swedish built right in.
  • Consistent design: Every report follows the same layout and color theme.
  • Automation hooks: Automatic detection of missing modules or permissions.
  • Portable reports: JSON output for future analysis and cross-tool integration (a quick sketch follows this list).
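
As a quick sketch of that portable output - the result objects and the file name here are made up for the example, not taken from the engine:

```powershell
# Hypothetical sketch - the result shape and file name are illustrative only.
$results = @(
    [pscustomobject]@{ Id = 'ENTRA-CA-001';   Category = 'Conditional Access'; Result = 'Failed' }
    [pscustomobject]@{ Id = 'ENTRA-AUTH-002'; Category = 'Authentication';     Result = 'Passed' }
)

# Write a portable JSON file alongside the branded HTML report,
# so other tools (or a future analysis pipeline) can consume the same data.
$results | ConvertTo-Json -Depth 5 | Set-Content -Path '.\AssessmentResults.json' -Encoding UTF8
```

The same data that drives the HTML report can then be picked up by anything else that speaks JSON.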

I started this as a replacement for Maester.
I ended up with something that can scale across teams, service areas, and customers - all while being fun to use and easy to extend.

Reflection - AI as a co-pilot, not a shortcut

Let's get one thing straight:
AI didn't do it all. I still spent plenty of time tweaking code, testing logic, and changing my mind about how something should work.

But it accelerated everything.
It allowed me to move from concept to working prototype in days, not weeks or months.
It challenged my assumptions, kept the structure clean, and yes - occasionally made errors.

Most of the work was done during a time when I wasn't physically well. Whatever bug I had caught took away a lot of energy, but seeing the quick progress kept me going. Without AI, I wouldn't have accomplished much during those late evenings.

What's next

This engine hasn't gone live yet. My old solution, which builds on top of Maester, is currently in use. But it won't be for much longer, thanks to AI.
