
  • Pi-hole Ad and Malware Blocker Setup

    Introduction
Pi-hole provides numerous benefits as a network-wide ad blocker and privacy tool. It eliminates annoying ads and pop-ups across all devices, resulting in a cleaner and more streamlined browsing experience. By blocking ad-related domains, Pi-hole accelerates webpage loading times, saving bandwidth and reducing data consumption. It also enhances online security by blocking access to malicious domains and preventing tracking and data collection by advertisers. Overall, Pi-hole offers an effective and convenient solution to improve browsing speed, reduce data usage, bolster privacy and enhance online security, and this guide walks through setting one up.
EtherApe
Using EtherApe, I'm going to demonstrate the effectiveness of Pi-hole on a well-established bastion of truth and a British institution (cough) that is particularly heavy on adverts, the Daily Mail. Before the Pi-hole is enabled there are numerous and sustained video pop-ups, header ads and ads on both sides of the news articles. The network noise is outrageous, both in the number of connections to ad sites and the amount of traffic, represented by the heat map. After the Pi-hole is enabled: video pop-ups - gone; header ad - gone; ads on both sides of the news articles - gone. EtherApe shows a much calmer heat map with far fewer outbound connections.
Equipment
The following equipment is required, mine's from Amazon. Raspberry Pi 4 Model B - £97.99. SanDisk 128GB Extreme microSDXC - $16.99. Raspberry Pi 4 USB-C Power Supply - £11.99. Total £126.17.
Raspberry Pi Installation
Raspberry Pi makes downloading and burning the image to the microSD card easy, needing only the Imager executable. Download and install it from https://www.raspberrypi.com/software and the wizard will guide you through the burning process. Run the Imager and select Operating System. Select 'Raspberry Pi OS (64-bit)'. Insert the microSD card into the PC, select Storage and then choose the correct storage device. Click on the cog: set the credentials used to manage the Pi-hole, enable SSH and save. Click on Write and Yes to the warning message. The writing process takes a while; it's exhausting work, so go and top up with a coffee. Click Continue. If the Format Disk message appears, select Cancel. Remove the microSD card from the PC and insert it into the Raspberry Pi device. Attach the power and ethernet cables and it will power on automatically.
Pi-hole Installation
There are a couple of options for the initial configuration, including connecting a monitor, keyboard and mouse. I've opted for interrogating DHCP for the IP address of the Pi-hole, then reserving it. PuTTY to the IP address, type admin and the password set earlier. The first item on the itinerary is installing the latest patches for Raspberry Pi OS: sudo apt-get update then sudo apt-get upgrade. I'm stuck behind a firewall and need to point the Pi-hole to an internal time source, so configure NTP: sudo apt-get install ntp, sudo apt-get install systemd-timesyncd, then sudo nano /etc/systemd/timesyncd.conf and set NTP=192.168.0.249. To save the changes: Ctrl + O (write out to file), Ctrl + X (exit). Then sudo timedatectl set-ntp true and sudo reboot. Log back on via PuTTY and check the time sync with sudo timedatectl timesync-status. Installing Pi-hole is one command, followed by a wizard: curl -sSL https://install.pi-hole.net | bash. Click Ok to start the Pi-hole configuration. Read and then click Ok. Continue. Yes to set the current IP address assigned. Ignore the static IP warning; the IP has been reserved in DHCP. Select the preferred DNS server or add custom DNS entries.
You may wish to consider doubling up on the DNS filtering with the following free services. OpenDNS provides Family Shield for blocking adult content: 208.67.222.123 and 208.67.220.123. Cloudflare provides 1.1.1.1 for Families with the following two options. Malware blocking only: 1.1.1.2 and 1.0.0.2. Malware and adult content: 1.1.1.3 and 1.0.0.3. Yes to install the default block list. Yes to install the Admin Web Interface. Yes to install the pre-requisites. Yes to enable logging. Of course, I want to see everything. Make a note of the Web Admin password and Ok. The Web Admin password will be updated to something more complex later.
Pi-hole Configuration
Open a browser, enter the IP of the Raspberry Pi and enter the Web Admin password. Clearly, the most important issue to resolve is the interface: go to the Web Interface in Tools and set the Star Trek theme. Pi-hole block lists are extensible, so consider adding the following adlists. Don't feel it necessary to add all the lists at once; add one at a time and test, as some lists may be too restrictive and you'll be forever whitelisting.
AdAway Default Blocklist: blocking ads and known tracking domains. https://adaway.org/hosts.txt
OISD: blocks most ad, malware and porn domains. https://oisd.nl/setup
EasyList: a popular list that blocks various types of ads. https://easylist.to/easylist/easylist.txt
EasyPrivacy: a list that focuses on blocking privacy-invading trackers. https://easylist.to/easylist/easyprivacy.txt
MVPS: blocks ads, banners, and known malicious sites. http://winhelp2002.mvps.org/hosts.txt
AdGuard DNS Filter: a DNS filter list by AdGuard that blocks ads and trackers. https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt
Chad Mayfield: porn filter. https://raw.githubusercontent.com/chadmayfield/my-pihole-blocklists/master/lists/pi_blocklist_porn_all.list
Click on Adlists and add the URLs. Pi-hole won't automatically block the additional lists, they require processing: click on Tools, then Update Gravity and Update. Gravity will require monthly checks as the online lists are amended. Update the Web Admin password to something a little more complex via PuTTY: log in with admin and the initial password set in Imager, then type pihole -a -p
Maintenance
Updating Raspberry Pi and Pi-hole is essential for security and stability. Regular updates patch vulnerabilities, protecting against cyber threats. They improve system performance and fix bugs. Every month run the following commands by logging in via PuTTY with the admin account. Update Raspberry Pi OS: apt-get update and apt-get upgrade. Update Pi-hole: pihole -up. Update Gravity: pihole -g.
Update the Client's DNS Settings
Home User: for home users, DNS, the bit that resolves domain names to IP addresses, is handled by the router, whether BT, Virgin or Sky etc. Due to the different types of router and potential configurations I'm unable to provide clear and concise guidance; the router's DNS settings need updating to the IP of the Pi-hole.
My Setup
Meh, what can I say, it flips between two configurations depending on the cost of energy; my preferred setup is definitely off the cards at this moment. Current config: a pair of Pi-holes act as DNS proxies, with forwarders from the Domain Controllers (DCs). All client resolution is via the DCs. Or my preferred setup: the clients point their DNS to a pair of Pi-holes, these pass any queries on to the DCs and finally proxy out via a pair of Synology NASes. The benefit of this config is that the Pi-holes log the clients' hostnames.
The downside is the cost of running the hardware. Thanks for your time and support by reading this blog. If you found it useful, please share.
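If you would rather not remember the monthly maintenance commands, they can be wrapped in a small script and scheduled with cron. The sketch below simply strings together the commands from the Maintenance section above; the log path and schedule are illustrative assumptions rather than part of the original setup.
#!/bin/bash
# Monthly Pi-hole maintenance - add to root's crontab (sudo crontab -e)
LOG=/var/log/pihole-maintenance.log      # example log location
{
  date
  apt-get update && apt-get -y upgrade   # patch Raspberry Pi OS
  pihole -up                             # update Pi-hole itself
  pihole -g                              # rebuild Gravity from the adlists
} >> "$LOG" 2>&1
Save it as, for example, /usr/local/bin/pihole-maintenance.sh, make it executable with chmod +x, and a root crontab line such as 0 6 1 * * /usr/local/bin/pihole-maintenance.sh would run it on the first of each month.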

  • Understanding Windows 11, TPMs, PCRs, Secure Boot, Bitlocker and Where They Fail

    Understanding Windows 11, TPMs, PCRs, and Security Features
Windows 11 requires Trusted Platform Module (TPM) 2.0 as part of its foundation for enhanced security, alongside features like Secure Boot, BitLocker, and Virtualization-Based Security (VBS). With these tools, Microsoft aims to shield devices from evolving threats in an increasingly hostile digital landscape. This article takes a closer look at these features and highlights their limitations, particularly in the context of remote attacks.
Trusted Platform Module (TPM): The Basics
A Trusted Platform Module (TPM) is a specialized chip designed to enhance security by providing cryptographic operations, safeguarding sensitive data, and ensuring system integrity. It can: Generate, store, and manage cryptographic keys. Validate the integrity of the boot process using Platform Configuration Registers (PCRs). Support security features like BitLocker and VBS.
Types of TPM
Discrete TPM: A dedicated hardware chip soldered to the motherboard. Firmware TPM (fTPM): Built into the CPU and implemented via firmware.
Checking TPM Status
Windows Security App: Go to Settings > Privacy & Security > Windows Security, then navigate to Device Security > Security Processor. TPM Management Console: Open the Run dialog, type tpm.msc, and press Enter to check the status and specification version. Command Line: Run tpmtool getdeviceinformation to retrieve detailed TPM data, including: TPM Version: The specification version of the TPM (e.g., 2.0). Manufacturer Information: The manufacturer ID and version of the TPM chip. Supported Algorithms: Lists cryptographic algorithms supported by the TPM (e.g., RSA, SHA-256, etc.). PCR Banks: The hash algorithms used for Platform Configuration Registers (PCRs), such as SHA-1 or SHA-256. PCR Information: Indicates which PCRs are active and their supported configurations. TPM Status: The current operational state of the TPM, such as whether it's enabled, activated, or ready for use. PowerShell Cmdlets: Get-Tpm: Displays TPM status and version.
Platform Configuration Registers (PCRs): Ensuring Boot Integrity
PCRs in the TPM store hashed measurements of the system state during boot, providing a tamper-evident cryptographic record of boot-time events.
Uses of PCRs
Secure Boot: Validates the bootloader, ensuring only trusted code is executed. BitLocker: Uses PCR values to confirm system integrity. Mismatched values (e.g., from tampering) trigger recovery mode.
Commonly Used PCRs
PCR 0: Measurements from the BIOS, firmware, and Core Root of Trust for Measurement (CRTM). PCR 2: Measures UEFI drivers and option ROMs (pluggable executable code). PCR 4: Tracks bootloader integrity. PCR 7: Represents the Secure Boot configuration.
What is Secure Boot?
Secure Boot is a UEFI feature that ensures only signed and trusted bootloaders are executed during system startup. The TPM strengthens this process by securely measuring and storing key boot components' hashes in its Platform Configuration Registers (PCRs). These measurements create a tamper-proof record of the boot sequence.
How Secure Boot Works:
Digital Signatures: Each component in the boot chain (e.g., firmware, bootloader) must have a valid digital signature. Key Hierarchies: Platform Key (PK): Authorizes changes to Secure Boot settings. Key Exchange Key (KEK): Manages authorized signatures. Allowed and Forbidden Lists: Specify trusted and untrusted binaries. Secure Boot and PCRs: PCR 7 reflects the Secure Boot state.
Tampering with Secure Boot settings results in a different PCR value.
Checking Secure Boot Status:
Open the System Information tool (msinfo32) and look for Secure Boot State in the report.
What is BitLocker?
BitLocker is a full-disk encryption feature that leverages the TPM to secure data. It ensures that data remains inaccessible if the system is tampered with or the drive is removed. How BitLocker Uses TPM: Stores encryption keys securely in the TPM. Validates PCR values during boot. If the values match the expected measurements, the drive is unlocked. Configuring BitLocker: Open Explorer and navigate to C:. Right-click on C: and select 'Manage BitLocker'. Turn on BitLocker and follow the prompts.
What is Virtualization-Based Security (VBS) and HVCI?
Virtualization-Based Security (VBS) uses hardware virtualization to create isolated memory regions for security-critical operations, enhancing system security. VBS Features: Hypervisor-Enforced Code Integrity (HVCI): Ensures only signed and verified drivers and binaries are executed. Relies on the TPM for key storage and Secure Boot for integrity validation. Credential Guard: Protects domain user credentials by isolating LSASS (Local Security Authority Subsystem Service) processes. Enabling VBS: Check hardware support: virtualization support in BIOS/UEFI; run msinfo32 and look for "Hyper-V Requirements." Enable VBS: Open Windows Security > Device Security > Core Isolation and enable Memory Integrity. Verifying VBS Status: Run msinfo32 and look for Virtualization-Based Security in the report.
What These Features Don't Protect
While these tools provide strong defenses against physical tampering, they fall short against remote threats: Credential Theft - VBS's Credential Guard protects domain credentials but doesn't secure local account credentials, which can be dumped from memory. Additionally, techniques like pass-the-hash allow attackers to use stolen hashes without decryption. Application Exploits - TPM protections don't block malware that exploits software vulnerabilities. Attackers can bypass these defenses by targeting unpatched applications. Hardware-Level Attacks - Physical attacks on the Low Pin Count (LPC) bus could extract BitLocker keys if no PIN is used. Network-Based Attacks - Features like Secure Boot and TPM don't address phishing, network infiltration, or lateral movement.
Building a Comprehensive Security Strategy
To address these gaps, organizations should bolster TPM-based features with additional measures: Application Control - Tools like Windows Defender Application Control (WDAC) enforce strict policies, blocking unauthorized applications and malware. Regular Patching - Keeping systems and applications up to date mitigates risks from known vulnerabilities. Multifactor Authentication (MFA) - Adds a layer of protection against credential theft and unauthorized access. Endpoint Detection and Response (EDR) - Monitors for suspicious activity and stops advanced attacks.
The Takeaway
Windows 11's TPM-centric security features excel at defending against physical attacks, but they can't stop remote exploits, credential theft, or network-based threats on their own. Think of them as a sturdy lock: effective at preventing break-ins, but not enough if attackers exploit an open window. A layered security approach is essential to stay ahead of sophisticated threats.
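The status checks described above can be pulled together from a single elevated PowerShell prompt using built-in cmdlets. This is a minimal, read-only sketch; it assumes the TrustedPlatformModule, SecureBoot and BitLocker modules that ship with Windows 11 are available.
# TPM presence and readiness
Get-Tpm | Select-Object TpmPresent, TpmReady, ManufacturerIdTxt, ManufacturerVersion

# Secure Boot state (returns True/False; throws on legacy BIOS systems)
Confirm-SecureBootUEFI

# BitLocker status for the system drive
Get-BitLockerVolume -MountPoint "C:" | Select-Object VolumeStatus, EncryptionMethod, ProtectionStatus

# VBS services actually running (1 = Credential Guard, 2 = HVCI)
(Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard).SecurityServicesRunning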

  • BitLocker: A Closer Look

    In my previous blog, I explored how Microsoft leverages the Trusted Platform Module (TPM) to secure Windows 11. In this article, we're going to take a deeper dive into BitLocker.
What is BitLocker?
BitLocker is a full disk encryption feature integrated into Microsoft Windows, designed to safeguard the integrity and confidentiality of data. By encrypting the system drive, BitLocker ensures that unauthorized users cannot access sensitive information, even if they gain physical access to the hardware. A core part of BitLocker's security lies in the use of the Trusted Platform Module (TPM), which securely stores cryptographic keys needed to decrypt the data.
Key Concepts in BitLocker Encryption
Before diving into the workings of the private key and AES or XTS-AES, let's briefly define some of the key components involved in BitLocker's encryption process: Full Volume Encryption Key (FVEK): The FVEK is the primary encryption key used by BitLocker to encrypt and decrypt the entire volume (the disk or partition). It is a symmetric key, meaning the same key is used for both encryption and decryption. This key is essential for protecting the actual data stored on the drive. Trusted Platform Module (TPM): The TPM is a hardware chip embedded in most modern computers that provides secure storage for cryptographic keys and ensures that the system's boot process has not been tampered with. It is used in conjunction with BitLocker to protect the FVEK and to prevent unauthorized access to encrypted data. Password/PIN: A password or PIN is an optional but highly recommended security measure that adds an extra layer of authentication for unlocking the encrypted drive. This PIN/password is needed in addition to the TPM's cryptographic keys to unlock the system during boot. Adding a PIN/password mitigates the Low Pin Count (LPC) bus attack. Recovery Key: If the TPM or PIN is unavailable (for example, if the hardware is replaced), BitLocker provides a recovery key, which is a 48-digit numerical key. This recovery key is essential for unlocking the encrypted drive in such cases.
How BitLocker's Private Key Works
The concept of a private key in BitLocker differs from that of traditional asymmetric encryption, where two keys (a private key and a public key) are used. BitLocker uses symmetric encryption for disk encryption, meaning it uses a single key (the Full Volume Encryption Key) for both encryption and decryption. However, BitLocker's security is strengthened by using the TPM and other factors (such as a PIN or password) to protect access to the Full Volume Encryption Key (FVEK). The private key in this context is tied to the TPM and is crucial for managing access to the FVEK. Here's how it all works in detail:
Generation of the Full Volume Encryption Key (FVEK)
When BitLocker is first enabled on a system, the FVEK is generated. This key is used to encrypt the entire disk or volume. However, to protect this key, it cannot be stored on the disk in plain text. Instead, it is stored securely using the Trusted Platform Module (TPM).
TPM and the Protection of the Private Key
The TPM plays a central role in BitLocker's encryption system. It is a hardware-based security chip that is embedded in many modern systems to provide tamper-resistant storage for cryptographic keys. The TPM protects the FVEK by encrypting it with a TPM-specific key, the TPM's Storage Root Key (SRK).
This key is unique to the TPM and cannot be extracted by unauthorized parties, even if the hard drive is removed from the system and connected to another computer. Here’s how the process works: Encrypting the FVEK: When BitLocker is enabled, the FVEK is encrypted with the TPM’s key (which is securely stored in the TPM chip itself). Storing the Encrypted FVEK: The encrypted version of the FVEK is stored in the system’s memory and on the disk. However, it cannot be decrypted without the TPM and proper authentication (such as a PIN, password, or recovery key). Unlocking the Encrypted FVEK: Upon system startup, the TPM checks the system’s configuration, including the integrity of the BIOS, bootloader, and other critical boot components. If any changes are detected (for example, due to a malware attack or hardware change), the TPM will refuse to release the FVEK, thus preventing unauthorized access to the encrypted data. Releasing the FVEK: If the TPM verifies that the system configuration is unchanged and trusted, it will decrypt the FVEK and pass it to the system. This is the moment when the encryption key becomes available to decrypt the data on the disk. At this point, the system can proceed with loading the operating system and allowing the user to interact with their data. AES-256 vs. XTS-AES-256: The Encryption Methods BitLocker can use different encryption algorithms, and understanding the difference between AES-128, AES-256, XTS-AES-128 and XTS-AES-256 helps in understanding how BitLocker protects your data. In the context of this article AES-128 and XTS-AES-128 will be ignored. Both AES-256 and XTS-AES-256 are symmetric encryption algorithms, meaning they use the same key for both encryption and decryption, but they differ in how they operate and the level of protection they offer. AES-256 AES (Advanced Encryption Standard) is a widely-used encryption standard that provides strong encryption capabilities. The "256" in AES-256 refers to the length of the key used in the encryption process: 256 bits. AES-256 works by encrypting the data in fixed-size blocks (128 bits) using a key that is 256 bits long. While AES-256 is secure and resistant to brute-force attacks, the challenge with traditional AES encryption lies in the potential vulnerabilities in how it handles block ciphers. Specifically, in the case of full-disk encryption, AES-256 does not account for the fact that some patterns might emerge within the plaintext data as it’s encrypted. This is where XTS-AES-256 comes in. XTS-AES-256 XTS-AES-256 (or XEX Tweakable Block Cipher with Ciphertext Stealing) is an enhanced version of AES-256 specifically designed for disk encryption. While it uses the same AES-256 algorithm, it introduces a second key and modifies the way the encryption is applied to improve security, especially against attacks on the underlying disk encryption. XTS-AES-256 employs tweaking as part of its encryption process. It uses a tweak value to change how each block is encrypted, preventing certain patterns or structures in the encrypted data from being exploited. This makes it significantly harder for attackers to perform certain types of cryptanalysis on the encrypted data, particularly in full-disk encryption scenarios. For BitLocker, XTS-AES-256 is the preferred encryption method because it is specifically designed for disk encryption and provides stronger protection in that context. 
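For reference, the encryption method can be chosen explicitly when BitLocker is enabled from PowerShell. The sketch below is illustrative rather than a required step: it assumes an elevated prompt, a ready TPM, and that XTS-AES-256 is wanted on the operating system drive.
# Check the current state and encryption method of the system drive
Get-BitLockerVolume -MountPoint "C:" | Select-Object VolumeStatus, EncryptionMethod

# Enable BitLocker with XTS-AES-256, protected by the TPM alone
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -TpmProtector

# Add a recovery password as a fallback protector
Add-BitLockerKeyProtector -MountPoint "C:" -RecoveryPasswordProtector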
Adding a PIN or Password
In addition to the TPM's encryption of the FVEK, BitLocker can also be configured to require an additional authentication factor, such as a PIN or password. This adds another layer of security, ensuring that the FVEK is not released even if the TPM is bypassed. Here's how the process works when a PIN is added: PIN Encryption: The PIN is combined with the TPM's key and a unique public key to create a secure, trusted boot environment. This combination of the TPM's key and the user-supplied PIN ensures that the encrypted disk remains inaccessible without both the physical TPM key and the correct PIN. Decryption of the FVEK: The TPM will release the encrypted FVEK only if the correct PIN is entered at boot. Without the correct PIN, even if an attacker has physical access to the machine, they cannot decrypt the FVEK and thus cannot access the data on the drive.
How the LPC Bus Can Compromise the TPM
The LPC bus operates as a communication channel between the TPM chip and the Southbridge, and indirectly the Northbridge and CPU. Since this bus was not originally designed with modern security threats in mind, it lacks encryption or robust protection mechanisms.
Enhancing Security with a PIN
To mitigate the risk of LPC bus attacks, BitLocker allows the use of a PIN as an additional authentication factor. Here's how it works: User Input Required: Before the decryption process begins, the user must enter a PIN. This adds an extra layer of security beyond the TPM's PCR-based integrity checks. Secure Key Unsealing: The TPM uses the correct PIN to unlock the private key. Without the PIN, the private key remains sealed, even if an attacker has access to the LPC bus. Protection Against Physical Attacks: Since the PIN is not transmitted over the LPC bus, it cannot be intercepted. This makes it effective against attacks that exploit the LPC bus to extract the private key.
Recovery Key
In case the TPM is unable to release the FVEK (for instance, if hardware is changed or the TPM's configuration is corrupted), BitLocker allows users to unlock the drive using a recovery key. This recovery key is typically a 48-digit numerical code that can be used to manually unlock the drive when other authentication methods fail. The recovery key can be stored in various ways: saved to a USB drive, printed out and stored in a secure location, or stored in a Microsoft account or Active Directory for enterprise users. If the TPM does not release the FVEK during boot, the system will prompt the user to enter the recovery key, allowing access to the encrypted disk.
Conclusion
BitLocker, when used with the TPM and XTS-AES-256 encryption, provides a highly secure solution for protecting data at rest. The TPM ensures that the decryption key is securely stored and not easily extracted, while XTS-AES-256 improves the security of full-disk encryption by mitigating the risk of attacks that exploit patterns in the encrypted data. Incorporating a PIN into the BitLocker setup, along with TPM and XTS-AES-256 encryption, provides the highest integrity for securing sensitive data and protecting against a wide range of potential threats.
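As a footnote to the PIN discussion above, a TPM + PIN protector can be added from an elevated PowerShell prompt once BitLocker is enabled. This is a minimal sketch, assuming the Group Policy setting 'Require additional authentication at startup' already permits a startup PIN and that C: is the system drive.
# Prompt for the startup PIN without echoing it to the console
$pin = Read-Host -Prompt "Enter a startup PIN" -AsSecureString

# Add a combined TPM-and-PIN protector to the system drive
Add-BitLockerKeyProtector -MountPoint "C:" -TpmAndPinProtector -Pin $pin

# Confirm which key protectors are now in place
(Get-BitLockerVolume -MountPoint "C:").KeyProtector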

  • When a Microsoft Engineer Meets Open Source: Deploying VS Code on Rocky Linux with Ansible.

    Ah, the irony. Here I am, a proud Microsoft engineer, wielding Ansible—a shining beacon of open-source automation—to deploy Microsoft's beloved Visual Studio Code on Rocky Linux. As a Microsoft engineer, one might assume my life revolves around the infinite loop of Windows, Azure, and—let's be honest—occasionally cursing at Intune while sipping lukewarm coffee. Despite a lifetime using Microsoft's polished GUIs and enterprise-grade everything, sometimes it's just fun to roll up our sleeves and embrace the gritty beauty of YAML. It's an engineer's rite of passage to wrestle with variables, play whack-a-mole with failed dependencies, and eventually bask in the glory of "PLAY RECAP: SUCCESS." So why Rocky Linux? Why Ansible? Because, in the spirit of open source, we go where the community goes. And because, as much as I love PowerShell, sometimes you just want to let Linux do its thing. Let's dive in and show the world that even a Microsoft engineer can deploy Microsoft software with an open-source tool on a Linux distro. Spoiler alert: It's actually kind of awesome.
Prerequisite Steps
Before diving into Ansible, we have set up three Rocky Linux virtual machines, each configured with 2 CPUs and 4GB of RAM.
Rocky Linux Nodes
rocky01 = 192.168.0.28 - Ansible Controller
rocky02 = 192.168.0.38 - Dev Node 01
rocky03 = 192.168.0.39 - Dev Node 02
Create an Admin User
During the setup, each node was configured with a user account named 'user' that has administrator privileges. If root was used instead, create the account with the following commands:
sudo adduser user
sudo passwd user
sudo usermod -aG wheel user
Install SSH on Dev Nodes (02-03)
SSH to each of the Dev nodes:
ssh user@192.168.0.38
ssh user@192.168.0.39
Install openssh-server:
sudo dnf install openssh-server
Create a Public/Private Key on the Ansible Controller
Generate an SSH key using the user account:
ssh-keygen -t ed25519 -C "ansible controller"
Either provide a file name or use the default option. If you choose to specify a file name, ensure you include the full path. For best practice, enter a password; however, pressing Enter without typing anything will leave the password blank. ssh-keygen: This is the command used to generate, manage, and convert SSH keys. -t ed25519: Specifies the type of key to create. ed25519 is an elliptic-curve signature algorithm that provides high security with relatively short keys. It is preferred for its performance and security over older algorithms like rsa or dsa. -C "ansible controller": Adds a comment to the key. This comment helps identify the key later, especially when managing multiple keys; in this case, the comment is "ansible controller", indicating that the key belongs to the Ansible control node. List the contents of the .ssh directory; the .pub file contains the public key, which is to be shared with other nodes:
ls -la .ssh
Copy the Public Key to the Dev Nodes
Use the ssh-copy-id command to copy the public SSH key to the Dev nodes, enabling passwordless authentication. This command appends the public key to the ~/.ssh/authorized_keys file on the target node, ensuring secure access. This process requires the target node's password for the first connection; afterward, the SSH key allows secure, passwordless logins. For example:
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@192.168.0.38
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@192.168.0.39
Test the connection to each Dev node.
ssh -i ~/.ssh/id_ed25519 user@192.168.0.38
ssh -i ~/.ssh/id_ed25519 user@192.168.0.39
Install Ansible on the Controller Node
Set up Ansible on the Ansible Controller node by executing the following commands:
sudo dnf update
sudo dnf install epel-release
sudo dnf install ansible
Copy the Playbook from GitHub
Clone the GitHub repository and move it to /home/user/ansible-vsc:
git clone https://github.com/Tenaka/ansible_linux_vcs.git
mkdir ansible-vsc
mv ansible_linux_vcs/* ~/ansible-vsc
cd ansible-vsc
Keep in mind that ~ refers to the home directory in Linux. Running tree shows the layout of the playbook.
A Quick Review of the Playbook
Some amendments to the inventory.txt file are probably needed, so I'm using nano as the text editor and steering clear of vi—there's only so much this MS Engineer is willing to embrace. ansible.cfg defines the settings for this Ansible playbook: inventory = specifies the inventory file (inventory.txt) that contains the list of hosts Ansible will manage. private_key_file = indicates the path to the private SSH key (~/.ssh/id_ed25519) used for authenticating to remote hosts.
~/ansible-vsc/ansible.cfg
[defaults]
inventory = inventory.txt
private_key_file = ~/.ssh/id_ed25519
~/ansible-vsc/inventory.txt
[all]
192.168.0.28
192.168.0.38
192.168.0.39
[visualstudio]
192.168.0.38
192.168.0.39
~/ansible-vsc/visualcode.yml
---
- hosts: all
  become: true
  roles:
    - baseline
- hosts: visualstudio
  become: true
  roles:
    - visualstudio
~/ansible-vsc/roles/visualstudio/tasks/main.yml
- name: Add Microsoft GPG key
  rpm_key:
    state: present
    key: https://packages.microsoft.com/keys/microsoft.asc
- name: Add Visual Studio Code repository
  yum_repository:
    name: vscode
    description: "Visual Studio Code"
    baseurl: https://packages.microsoft.com/yumrepos/vscode
    enabled: yes
    gpgcheck: yes
    gpgkey: https://packages.microsoft.com/keys/microsoft.asc
- name: Install Visual Studio Code
  yum:
    name: code
    state: latest
# Don't run as root when installing extensions
- name: Install desired VS Code extensions
  become: false
  shell: "code --install-extension {{ item }} --force"
  loop:
    - redhat.ansible
    - redhat.vscode-yaml
  register: vscode_extensions
  changed_when: "'already installed' not in vscode_extensions.stdout"
- name: Display installed extensions
  debug:
    msg: "Installed extensions: {{ vscode_extensions.results | map(attribute='item') | list }}"
While VSC is installed using sudo, installing extensions with elevated privileges does cause issues; therefore, become is set to false.
Deployment of Visual Studio Code
Make sure to run the playbook from the ~/ansible-vsc directory. The command ansible-playbook --ask-become-pass visualcode.yml runs the Ansible playbook visualcode.yml with the following options: --ask-become-pass: prompts you to enter a password for elevated (sudo) privileges on the target hosts. visualcode.yml: specifies the playbook file to be executed.
ansible-playbook --ask-become-pass visualcode.yml
Enter the password at the prompt and sit back whilst Ansible does all the work. In the Ansible playbook output, 192.168.0.38 had already had VSC deployed successfully during earlier testing: changed: indicates that a task made modifications to the target system. ok: means that the task completed successfully without making any changes; this often happens when the system is already in the desired state, such as when a package is already installed or a configuration file is already correct. Of course, these Linux boxes have a GUI installed—I'm an MS Engineer, and it's required for VSC. So log in to each of the Dev nodes and launch VSC.
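Before and after the playbook run, a couple of ad-hoc commands from the controller make a handy sanity check. These use the same inventory and standard Ansible modules; treat the exact output as illustrative.
# Confirm the controller can reach every node over SSH with the key
ansible all -m ping

# Check the installed VS Code version on the dev nodes
ansible visualstudio -m command -a "code --version"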
After rolling up my sleeves and diving headfirst into the untamed wilderness of Linux, this Microsoft engineer emerged with calloused hands and a newfound love for Ansible. Sure, there were battles with YAML (was that 3 or 4 spaces?), but every "PLAY RECAP: SUCCESS" felt like a badge of honor. And while I still instinctively reach for the Reboot button at every minor annoyance, I now pause a second or two to consider whether the reboot is the correct course of action. Of course it is, it's the only action that works.

  • The Case for Cloud Repatriation, is it a Case of Buyer's Remorse?

    Cloud Repatriation: Navigating the Evolving Landscape of Cloud Computing
As cloud computing has become integral to modern business, the promise of scalable, flexible, and cost-effective infrastructure has driven widespread adoption. However, a new trend is emerging: cloud repatriation—the strategic shift of some workloads back to on-premises or private cloud environments. This trend highlights that while public cloud platforms like AWS, Microsoft Azure, and Google Cloud offer significant benefits, certain workloads or organizational needs may be better suited to alternative setups. In this post, we'll delve into the reasons behind cloud repatriation, the benefits organizations are experiencing, and how top cloud providers are responding to evolving needs by offering solutions for hybrid and multi-cloud environments.
Understanding Cloud Repatriation
Cloud repatriation refers to the migration of workloads, applications, or data from public cloud services back to on-premises infrastructure, private clouds, or hybrid environments. Initially, businesses embraced the cloud due to its flexibility and potential for cost savings. But as these companies gained experience, some have found that certain workloads may perform better or be more cost-effective outside of the public cloud.
Key Reasons Driving Cloud Repatriation
Cost Optimization: While the cloud's pay-as-you-go model can reduce upfront costs, high-consumption or long-term workloads often lead to rising operational expenses. With high storage or egress fees, organizations sometimes find repatriating workloads a more cost-effective option. According to IDC, the cloud market is expected to reach $800 billion in 2024, with services like AWS, Azure, and Google Cloud capturing a significant share through specialized offerings. However, businesses are increasingly scrutinizing these costs, especially for stable workloads that may be more economical on-premises.
Performance and Latency: For applications with stringent latency requirements, such as real-time manufacturing systems or financial applications, on-premises infrastructure often provides a performance advantage. Edge computing is another alternative for applications where local data processing is essential. Companies using Google Cloud's edge computing solutions or Azure's hybrid cloud capabilities can still leverage the cloud for scalability while maintaining low-latency processing through on-premises resources.
Data Security and Compliance: Industries like healthcare and finance often operate under strict regulatory requirements that can make public cloud use complex. For instance, data sovereignty laws require certain data to remain within national borders, making it challenging to store this information across global data centers. In response, AWS, Google Cloud, and Azure offer specialized compliance tools and region-specific data residency options to support secure and compliant cloud storage.
Vendor Lock-In Concerns: The desire to avoid dependency on a single provider has driven many organizations to pursue hybrid or multi-cloud strategies, enabling more flexibility. Cloud providers use proprietary tools and services that can be challenging to migrate, leading some companies to repatriate to prevent being locked into one ecosystem. Solutions like Azure Arc, Google Anthos, and AWS Outposts are designed to alleviate these concerns by enabling interoperability across environments, supporting a more integrated approach.
Customization and Control: Public clouds are generally built for broad usability, which can limit customization. Businesses with specific needs, like specialized hardware for high-frequency trading, can achieve more tailored setups in on-premises environments. Public cloud providers like AWS have introduced features that support unique configurations. However, full control is sometimes only achievable through a private or hybrid setup.   Long-Term Stability and Predictability: Public cloud platforms often undergo frequent updates, which can impact stability for core business applications. For stable and predictable environments, maintaining hardware on-premises allows organizations more control over changes and configurations.   Benefits of Cloud Repatriation   Cost Efficiency: By moving steady or high-usage workloads on-premises, businesses can better manage costs by avoiding recurring fees. Improved Performance: Organizations benefit from reduced latency and improved control for performance-critical applications. Enhanced Security: Cloud repatriation offers companies tighter control over data security, a significant advantage for industries with stringent data protection requirements. Infrastructure Control: A customized IT environment allows for specific hardware and software configurations not possible in the standardized public cloud. Reduced Vendor Dependency: Repatriation enables businesses to adopt a multi-cloud or hybrid strategy, reducing dependency on a single provider.   The Hybrid Model: Cloud Providers’ Response   Top cloud providers are recognizing the need for hybrid and multi-cloud strategies to accommodate these evolving business needs. Here’s how each provider is addressing this trend:   Amazon Web Services (AWS): AWS Outposts extends AWS infrastructure on-premises, allowing companies to run AWS services locally while maintaining a consistent experience with the public cloud. This approach enables AWS customers to leverage a hybrid model without complete reliance on the public cloud.   Microsoft Azure: Azure Arc allows businesses to manage resources across on-premises, multi-cloud, and edge environments, providing a comprehensive solution for businesses aiming to avoid vendor lock-in. Azure also offers Hybrid Benefit, which can help optimize costs by leveraging existing on-premises licenses.   Google Cloud: Google Anthos is designed for multi-cloud and hybrid deployments, allowing applications to be deployed across on-premises, Google Cloud, and other providers, creating an environment that offers flexibility and choice.   These hybrid and multi-cloud offerings demonstrate that leading cloud providers understand the diverse requirements of modern enterprises and aim to support flexible strategies that balance performance, cost, and regulatory compliance.   Final Thoughts   Cloud repatriation doesn’t signal the end of cloud computing but represents a move toward a balanced approach that leverages both on-premises and cloud resources. With the global cloud market projected to exceed $800 billion in 2024, organizations are carefully evaluating their options to optimize infrastructure for performance, security, and cost-effectiveness.   For many, a hybrid or multi-cloud model represents the best of both worlds, enabling businesses to retain control over critical workloads while benefiting from the cloud’s scalability. 
Ultimately, the decision to repatriate workloads or adopt a hybrid strategy should align with each organization’s unique goals, ensuring a cost-effective, secure, and high-performance infrastructure.

  • Securing Weak File, Folder and Registry Hive Permissions.

    In this blog, we'll examine how threat actors—often referred to as hackers—can escalate privileges when weak file, directory, or registry permissions are present. Many programs disable directory inheritance or assign excessive permissions to user accounts, leading to vulnerabilities. Finding these misconfigurations can be challenging, as it involves reviewing extensive file, directory, and registry hive permissions that are often overlooked. Fortunately, I have a few scripts that help detect and report these vulnerabilities and can also reset permissions to their secure defaults. But first, let's dive into the problem at hand...
The Risks
Improperly configured permissions for files, directories, and registry entries often create significant vulnerabilities that threat actors can exploit to escalate privileges or break out of restricted environments. When permissions are inadequately set, threat actors can gain access to or modify sensitive files, ultimately providing a pathway for unauthorized actions. Weak permissions enable unauthorized users to write and execute programs in specific directories or modify registry application paths, allowing them to redirect these paths to malicious locations. This redirection enables threat actors to inject and run their own code, giving them access to sensitive information or control over existing applications and files. Beyond simply executing programs, insecure directory permissions also allow unauthorized modification of file permissions. This level of access can be used to alter or delete important files or to introduce new files containing harmful code. Finally, these weak permissions open doors for attackers to leverage vulnerabilities within the operating system or its applications, allowing further access to the system. Additionally, unquoted paths and services with insufficient security configurations provide additional avenues for exploitation, allowing attackers to execute unauthorized commands and compromise system integrity.
What to do....
Manually validating permissions across the operating system can be a slow and tedious process. After discovering some critical permission issues and recognizing the importance of thorough validation, I began developing a script for automated validation and pentesting. This script is available for download on GitHub, with all relevant links provided at the bottom of the page.
The Scripts
The Security Report
Support Page
Fix for Weak Permissions
Fix Unquoted Paths
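To give a flavour of what automated checking looks like, the sketch below flags directories under Program Files where low-privileged groups hold write or modify rights. It is a minimal illustration of the technique rather than the GitHub script itself, and the groups and paths it checks are assumptions you would tune to your own environment.
# Groups that should not normally hold write access under Program Files
$weakGroups  = 'Everyone', 'BUILTIN\Users', 'NT AUTHORITY\Authenticated Users'
$writeRights = 'Write|Modify|FullControl|CreateFiles|AppendData'

Get-ChildItem 'C:\Program Files', 'C:\Program Files (x86)' -Directory -Recurse -ErrorAction SilentlyContinue |
    ForEach-Object {
        $acl = Get-Acl -Path $_.FullName -ErrorAction SilentlyContinue
        foreach ($ace in $acl.Access) {
            if ($ace.AccessControlType -eq 'Allow' -and
                $weakGroups -contains $ace.IdentityReference.Value -and
                $ace.FileSystemRights.ToString() -match $writeRights) {
                # Report the weak entry for review or remediation
                [pscustomobject]@{
                    Path   = $_.FullName
                    Group  = $ace.IdentityReference.Value
                    Rights = $ace.FileSystemRights
                }
            }
        }
    }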

  • Windows AutoPilot Device Preparation

    Windows Autopilot's Device Preparation is its new 'user-driven' workflow. Instead of IT staff registering all devices prior to handing them over to staff, there's the option for the device to be shipped directly from an OEM to the end user. With minimal steps—powering on the device, selecting the locale, connecting to Wi-Fi, and signing in with Microsoft Entra credentials—the system automates the rest. The device automatically joins Microsoft Entra ID, enrolls in Intune, installs key apps, and runs essential scripts, streamlining setup for users while reducing IT workload.
Key Features: The device joins Microsoft Entra ID. Intune enrollment with preconfigured policies. Automated installation of up to 10 essential apps and PowerShell scripts. This article covers the configuration steps for setting up Windows Autopilot device preparation using a user-driven Microsoft Entra join workflow.
Requirements: Windows 11, version 23H2 with KB5035942 or later. Windows 11, version 22H2 with KB5035942 or later.
Enrollment Config - Entra
Navigate to Entra with the following URL, allowing users to enroll devices. https://portal.azure.com/#home Then to Device Settings: Microsoft Entra ID > Devices (left-hand window) > Device Settings. Allow 'All' users to join devices.
Enrollment Config - Intune
Now navigate to Intune to configure the MDM user scope. https://intune.microsoft.com/#home Then to Devices > Enrollment > Automatic Enrollment. Select 'All' for the MDM User Scope.
User and Device Group
A couple of groups will be required, one to allow named users the ability to enroll devices and one for the devices themselves. From within Intune navigate to Groups. Create a Security Group with a name that reflects its purpose, e.g. AutoPilot_DevicePreparation_Users, and add named users or all users to this group. Create a second Security Group for devices, e.g. AutoPilot_DevicePreparation_Device, and don't add any members. Modify the Device Group's Owners: add the built-in service provided by Microsoft, 'Intune Provisioning Client', as the owner. This provides the 'Just in Time' rights for device auto enrollment.
AutoPilot Device Preparation
Navigate to Devices, Windows, Enrollment. Select 'Device Preparation Policies'. Provide a Name. Add the 'AutoPilot_DevicePreparation_Device' group. Under Configuration Settings leave the defaults. I've added some apps and scripts, the maximum is 10. For applications to install, the user must be a member of the deployment group. Add the 'AutoPilot_DevicePreparation_Users' group; these can be users who are part of the IT team that adds devices to Intune, or all users. Save.
Deployment
Sign in with an approved account, then sit back while the magic happens.
Links: https://learn.microsoft.com/en-us/autopilot/device-preparation/tutorial/user-driven/entra-join-device-group
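If you prefer to script the group setup, the device group and its 'Intune Provisioning Client' owner can also be created with the Microsoft Graph PowerShell module. This is a hedged sketch rather than an official procedure: the group name and mail nickname simply mirror the names used above, and the service principal is looked up by display name rather than a hard-coded ID (if it doesn't resolve in your tenant, refer to the Microsoft link above).
# Requires the Microsoft.Graph module: Install-Module Microsoft.Graph
Connect-MgGraph -Scopes "Group.ReadWrite.All", "Application.Read.All"

# Create the device group used by the Device Preparation policy
$group = New-MgGroup -DisplayName "AutoPilot_DevicePreparation_Device" `
    -MailEnabled:$false -MailNickname "AutoPilotDevicePrepDevice" -SecurityEnabled:$true

# Find the built-in Intune Provisioning Client service principal
$sp = Get-MgServicePrincipal -Filter "displayName eq 'Intune Provisioning Client'"

# Make it the group owner so devices get the Just-in-Time enrollment rights
New-MgGroupOwnerByRef -GroupId $group.Id -BodyParameter @{
    "@odata.id" = "https://graph.microsoft.com/v1.0/servicePrincipals/$($sp.Id)"
}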

  • Understanding Windows File Altitude: A Deep Dive into File System Filter Drivers

    When delving into the intricate workings of Windows file system architecture, one of the more technical concepts that often emerges is file altitude. If you’ve ever explored file system filter drivers or engaged in low-level system development, understanding this concept is crucial. This blog aims to break down the complexities of Windows file altitude, the role it plays in the kernel, and how it affects file system operations.   What is a Windows File System Filter Driver?   Before diving into file altitude, it’s essential to understand the role of file system filter drivers. In Windows, a filter driver operates within the kernel mode and can monitor, modify, or extend the functionality of file system operations. These drivers can be inserted into the I/O request path, between the application and the underlying file system, to intercept and possibly modify file operations such as read, write, and delete requests.   File system filter drivers are typically used for:   Antivirus solutions: to monitor and block malicious activities. File encryption or compression: to apply encryption or compression on the fly. Backup solutions: to intercept and manage file access for consistent backups. File system auditing or monitoring: for logging file system activities or imposing policies.   Introducing the Concept of File Altitude   In a system where multiple filter drivers are installed, there needs to be a way to define their order of operation. This is where altitude comes into play. Simply put, file altitude is a numerical value that dictates the position of a filter driver within the file system stack. The higher the altitude, the closer a driver is to the application layer (and further from the actual file system).   Windows ensures that these altitudes are registered and properly sequenced to avoid conflicts between drivers that might need to operate in a specific order.   How Altitude Works   Imagine a scenario where multiple drivers are installed for various purposes (e.g., an antivirus, a backup tool, and a logging tool). These drivers all want to interact with I/O requests. Without an ordering mechanism, there could be conflicts:   An antivirus might want to inspect a file before any backup software reads it. The backup software might need to know the original state of a file before encryption is applied.   Altitude values help resolve this by assigning each filter driver a priority based on its altitude. Windows ensures that the drivers with the highest altitudes receive I/O requests first, while those with lower altitudes are closer to the file system (and see the request last).   Altitude Numbering System   The altitude value is a floating-point number ranging from 0.000000 to 999999.999999. By convention, the lower the altitude number, the closer the driver is to the file system itself, and the higher the number, the closer it is to user mode operations.   Upper-range altitudes (e.g., 380000-499999) are typically reserved for drivers like encryption and compression tools that need to operate closer to user-mode applications. Middle-range altitudes (e.g., 200000-379999) are often used by antivirus software, which needs to filter I/O requests before they reach the disk. Lower-range altitudes (e.g., 0-199999) are usually occupied by drivers that need to interact closely with the file system itself, such as volume managers and file system encryption.   Each filter driver registered with the system must provide a unique altitude to prevent collisions or ordering issues.   
Managing Altitude in Windows
The Windows OS provides a centralized mechanism for managing filter driver altitudes. Filter Manager, a built-in component of Windows starting from Windows Server 2003, facilitates the registration and sequencing of these filter drivers. It ensures that drivers operate in the correct order based on their altitude, preventing lower-altitude drivers from inadvertently disrupting higher-altitude ones.
Querying Altitude
You can query a system's file system filter driver altitudes using the `fltmc` utility in an elevated command prompt. This utility displays loaded filter drivers, their altitudes, and their current operational state.
fltmc filters
The output lists each loaded filter's name, number of instances, altitude, and frame.
Registering a Driver with an Altitude
When developing or installing a new file system filter driver, you need to register the driver with an appropriate altitude to ensure that it functions correctly within the filter stack. The driver installation process typically handles this via INF files or registry entries. Altitudes are not chosen arbitrarily; they are managed and assigned by Microsoft. Developers must register for an altitude by contacting Microsoft's filter manager team to ensure that no two drivers conflict by using the same altitude.
Handling Altitude Conflicts
Altitude conflicts can arise when two or more drivers attempt to register for the same or similar altitudes, especially if one driver isn't aware of the other. If a conflict occurs: It can lead to unpredictable system behavior, including I/O request handling errors. In worst-case scenarios, it could result in BSODs (Blue Screens of Death) due to improper sequencing of I/O operations. By adhering to the altitude registration process, conflicts are minimized. The filter manager enforces altitude uniqueness to prevent these kinds of operational failures.
Practical Example: Antivirus and Backup Solutions
Consider a scenario where an antivirus solution and a backup tool are installed on the same machine: Antivirus Filter: This filter driver operates at an altitude of 350000. When an application requests to read or write a file, the antivirus filter intercepts the request first. It scans the file for malicious content before passing it down the stack. Backup Filter: This filter driver is at altitude 250000. After the antivirus completes its scanning, the request moves to the backup filter, which monitors the file for any changes, making a backup copy if necessary. File System Operations: Finally, the request is passed down to the actual file system, which handles the physical read or write operations. Without the correct altitude order, the backup software might try to back up a file before it has been scanned by the antivirus software, potentially saving a corrupted or infected file.
Conclusion
In summary, file altitude is a critical mechanism in the Windows file system architecture that governs the order in which filter drivers process I/O requests. By assigning a specific altitude to each filter driver, Windows ensures that drivers operate in the correct sequence, minimizing conflicts and ensuring the integrity of file system operations. Whether you're developing file system tools or managing enterprise-level systems, understanding and properly handling file altitude is crucial for maintaining system stability and security.
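Each minifilter's altitude also lives in the registry under its service key, which offers another way to inspect the ordering that fltmc reports. The sketch below is an illustrative, read-only PowerShell query; the driver names returned will vary from system to system.
# Minifilter instances record their altitude under
# HKLM\SYSTEM\CurrentControlSet\Services\<driver>\Instances\<instance>\Altitude
Get-ChildItem 'HKLM:\SYSTEM\CurrentControlSet\Services' -ErrorAction SilentlyContinue |
    ForEach-Object {
        $instances = Join-Path $_.PSPath 'Instances'
        if (Test-Path $instances) {
            Get-ChildItem $instances | ForEach-Object {
                $alt = (Get-ItemProperty $_.PSPath -ErrorAction SilentlyContinue).Altitude
                if ($alt) {
                    [pscustomobject]@{
                        Driver   = ($_.PSParentPath -split '\\')[-2]
                        Instance = $_.PSChildName
                        Altitude = $alt
                    }
                }
            }
        }
    } | Sort-Object { [double]$_.Altitude } -Descending   # highest altitude = closest to user mode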

  • Staying Safe on the Internet: Essential Tips for Protecting Yourself Online

    These days, the internet is such a big part of our daily lives. Whether we're banking, chatting with friends, shopping, or learning something new, we're always online. While it opens up a world of possibilities, it also comes with risks to our personal info, privacy, and security. As cyber threats keep evolving, it's more important than ever to know how to stay safe online. Let's go over a few simple tips to help you protect yourself while navigating the internet.
Use Strong, Unique Passwords
Your password is your first line of defense against unauthorized access. Make sure it's strong and unique. A good password should: Be at least 12 characters long. Include a mix of uppercase and lowercase letters, numbers, and symbols. Avoid easily guessable words like "password" or personal information such as your name or birthday. Change your passwords every 6 to 12 months. Reusing passwords across multiple accounts can put you at significant risk because hackers can exploit this practice. When a company or service is hacked, user data, including usernames and passwords, can be stolen. These credentials are often sold or shared on the dark web or hacker forums. Even if only one account is compromised, reusing the same password across different accounts can have a ripple effect. Tip: Consider using a password manager to store and generate secure passwords.
Enable Two-Factor Authentication (2FA)
Two-factor authentication adds an extra layer of security by requiring a second form of verification in addition to your password. This could be a code sent to your phone, a fingerprint, or facial recognition. Even if someone has your password, they won't be able to access your account without the second factor. Tip: Use the Google Authenticator app.
Keep Software and Devices Updated
Cybercriminals often exploit vulnerabilities in outdated software to gain access to your devices. Regularly updating your operating system, apps, and antivirus software helps protect against these vulnerabilities. Enable automatic updates on your devices to ensure you always have the latest security patches. Remove infrequently used or unused apps from your phone.
Be Smart with Downloads
Downloading software or files from untrusted websites can expose your device to malware. Only download apps from official stores (such as Google Play or the Apple App Store) and avoid pirated content. Malware can steal sensitive information or even hold your device hostage (ransomware). Tip: Ensure all devices have anti-virus software.
Be Cautious with Public Wi-Fi
Public Wi-Fi networks, like those in cafes or airports, can be convenient but risky. Hackers can intercept your data if you're not careful. Avoid accessing sensitive accounts (such as banking or email) over public Wi-Fi without using a virtual private network (VPN). A VPN encrypts your data and adds an extra layer of protection.
Beware of Phishing Scams
Phishing scams are attempts by cybercriminals to trick you into revealing personal information by pretending to be someone trustworthy, such as a bank or a colleague. These scams often come in the form of emails or text messages that contain malicious links or attachments. How to avoid phishing: Don't click on links or download attachments from unknown senders. Verify the sender's email address and look for suspicious grammar or spelling errors. If you receive a suspicious email from a legitimate organization, contact them directly using verified contact information.
Use Privacy Settings On social media platforms and other online services, take the time to review and adjust your privacy settings. Limit the amount of personal information you share publicly, and ensure that only trusted individuals can view your private details. Many websites and apps track your online activity, so disabling tracking features can improve your privacy. Secure Your Home Wi-Fi Network Your home Wi-Fi network is the gateway to all of your internet-connected devices. To protect it: Change the default router password to something strong and unique. Use WPA3 or WPA2 encryption. Hide your network by disabling SSID broadcasting. Enable a guest network for visitors, so they don’t have access to your main devices. Monitor Your Online Accounts Regularly monitoring your accounts can help you spot suspicious activity early. Many online services offer notifications for unusual activity, such as login attempts from unknown devices. If you notice anything out of the ordinary, change your password immediately and report the issue to the service provider. Tip: Set up account activity alerts where possible to stay informed of any unusual actions. Educate Yourself The digital world is constantly evolving, and so are the threats. Staying informed about the latest online security trends can help you avoid falling victim to new scams or vulnerabilities. Follow trusted security blogs, attend webinars, and consider taking online courses to enhance your knowledge of cybersecurity. Conclusion By practicing these habits, you can significantly reduce the risk of falling victim to cyber-attacks. Staying safe on the internet requires vigilance, but by taking the right precautions, you can enjoy the benefits of the digital world with peace of mind. Protect your personal information, stay alert to potential threats, and always prioritize your online safety.

  • Quick Guide for Intune's Autopilot

    Intune's Autopilot automates the configuration and setup of new devices, allowing users to start working with pre-configured settings, applications, and security policies as soon as they power on their device. In this blog, we'll explore how Microsoft Intune Autopilot works. Let's get started.

Dynamic Group for Deployment Profile
To ensure that every newly registered device is associated with Autopilot automatically, you first need to create a dynamic Azure AD (Entra) security group. From within Intune, browse to Groups and then click on New Group. Edit the Dynamic Query, then paste the following string and Save.
(device.devicePhysicalIDs -any (_ -startsWith "[ZTDid]"))

Enrollment Configuration
From within Intune, browse to Devices, Windows, then Enrollment.

Device Platform Restrictions
Intune Device Platform Restrictions control which types of devices can access organizational resources based on their platform (e.g., Windows, iOS, Android, macOS). This feature helps enhance security by limiting access to approved device types and blocking untrusted or unsupported platforms. This step isn't necessary for Autopilot to work, as the default is to allow all devices; however, we will block personally owned Windows devices. Click on the 'All Users' link, then change Personally owned devices for Windows (MDM) to Block.

Deployment Profiles
Autopilot deployment profiles in Microsoft Intune are configuration templates that define how new devices are set up and managed during the out-of-box experience (OOBE). These profiles allow automated and customizable deployment processes, specifying settings such as the Azure AD join type and user-driven or self-deploying mode.
Navigate to Deployment Profiles within the Enrollment tab, then select Create Profile. Provide a name and select Yes for 'Convert all targeted devices to Autopilot'; this enables non-Autopilot devices that are already members of Entra to become Autopilot-registered when they are assigned to the profile group. Select User-Driven and any other pertinent settings. Assign the Windows Autopilot group created earlier and then save the changes.
That covers the basics of configuring auto enrollment. I'll skip the Enrollment Status Page for now, as it's not essential for this introductory guide.

Enrollment of a Device
For the purposes of this blog, a Windows 11 23H2 OS has been installed on Hyper-V, and the setup has been progressed to the Region selection page. Press Shift + F10 to open an administrative command prompt, then type the following to download the Autopilot PowerShell script and register the device online (an offline, CSV-based alternative is sketched at the end of this post).
powershell
install-script get-windowsautopilotinfo
set-executionpolicy -ex bypass
get-windowsautopilotinfo -online
Enter Azure credentials to register the device and accept the permissions request. Wait while the device completes the registration, then go back to Autopilot under the Devices section and verify that the device has been successfully registered.
Restart the device, which will then connect to Intune and retrieve the assigned policies. Enter your Azure credentials. Once the device is ready, log in, and after a brief wait, any assigned applications will begin to install.
That wraps up this quick configuration guide for Intune Autopilot.
Links: https://learn.microsoft.com/en-us/autopilot/enrollment-autopilot
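If the device can't reach the internet during OOBE, or you'd rather collect hashes centrally, the same community script can export the hardware hash to a CSV for manual import on the Windows Autopilot devices page. This is a minimal sketch, assuming the Get-WindowsAutopilotInfo script from the PowerShell Gallery; the output path is purely illustrative.
set-executionpolicy -ex bypass
install-script get-windowsautopilotinfo
# Write the hardware hash to a CSV instead of registering the device online
get-windowsautopilotinfo -OutputFile C:\Temp\AutopilotHWID.csv
The CSV can then be uploaded in Intune under the Autopilot devices Import option, which is handy when staging devices in bulk.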

  • Deploying Windows Domains as an EC2 Instance with PowerShell - Part 1

    Welcome back! In this blog, I'll demonstrate how you can leverage PowerShell to automate the entire setup of a Windows domain environment on AWS, from creating the VPC to configuring the EC2 encrypted volumes.
Before we start, be aware that deploying this will incur AWS costs: the instance type is t3.medium and the volume is set to $ebsVolType = "io1" and $ebsIops = 1000.
This is Part 1 of a two-parter, and it focuses on setting up the scripting environment and meeting the prerequisites. The ultimate goal is to deploy a public-facing Remote Desktop Server (RDS) and a private Domain Controller (DC) with PowerShell. The Remote Desktop Server will serve as a jump box, providing remote access to the network, while the Domain Controller will be securely tucked away in a private subnet, only accessible through the RDS.

Prerequisites
There are a few prerequisites before deploying EC2 instances from PowerShell:
PowerShell version 7 or Visual Studio Code is required.
An AWS account and its corresponding Access ID and Secret Key.
The AWS account requires the 'AdministratorAccess' role or delegated permissions.
A basic understanding of both AWS and Windows Domains.
The default password for the EC2 instances is 'ChangeMe1234'.

Previous post on automating Domain and OU creation
Before diving into this blog, I highly recommend checking out the previous blogs where I used PowerShell to deploy a domain and create an Organizational Unit (OU) structure. The script used for this AWS blog is a slightly customized version of the Domain script below and as such doesn't require downloading.
The description: https://www.tenaka.net/post/deploy-domain-with-powershell-and-json-part-1
The original Domain script: https://github.com/Tenaka/Active-Directory-Automated-Deployment-and-Delegation

Install Visual Studio Code or PowerShell
I recommend installing either PowerShell 7 (PS7) or Visual Studio Code (VSC), along with the latest .NET SDK.
.NET SDKs for Visual Studio: https://dotnet.microsoft.com/en-us/download/visual-studio-sdks
Download Visual Studio Code: https://code.visualstudio.com/download
Installing PowerShell on Windows: https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell

AWS Account and permissions\Access ID
From within the AWS console, navigate to IAM and create a service account specifically for executing scripts to create the required AWS services. Ensure this service account has the necessary permissions by adding the following policies and the two custom policies.
AmazonEC2FullAccess, AmazonS3FullAccess, AWSKeyManagementServicePowerUser, AmazonSSMReadOnlyAccess, IAMFullAccess, AmazonSSMManagedInstanceCore
KMS policy to allow enabling EC2 encrypted volumes; this policy requires further tweaking as it's far too encompassing.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:GenerateRandom",
        "kms:ListRetirableGrants",
        "kms:CreateCustomKeyStore",
        "kms:DescribeCustomKeyStores",
        "kms:ListKeys",
        "kms:DeleteCustomKeyStore",
        "kms:UpdateCustomKeyStore",
        "kms:Encrypt",
        "kms:ListAliases",
        "kms:GenerateDataKey",
        "kms:DisconnectCustomKeyStore",
        "kms:CreateKey",
        "kms:DescribeKey",
        "kms:ConnectCustomKeyStore",
        "kms:CreateGrant"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "kms:*",
      "Resource": "*"
    }
  ]
}
Additionally, Session Manager rights are needed.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ssm:SendCommand", "ssmmessages:CreateDataChannel", "ssmmessages:OpenDataChannel", "ssmmessages:OpenControlChannel", "ssmmessages:CreateControlChannel" ], "Resource": "*" } ] } If nothing else works, consider adding the 'AdministratorAccess' policy to the service account. Create Access Key Create an Access Key by navigating to the Security tab of the service account and creating a 'Command Line Interface' (CLI) use case. Record the Access Key and Secret Access Key. Download this script... After you've familiarized yourself with the above concepts covered in our previous blogs and created the AWS account with the correct rights, download the PowerShell DeployVPCwithDomain.ps1 script from the link below. https://github.com/Tenaka/AWS-PowerShell/blob/main/DeployVPCwithDomain.ps1 This script is designed to automate the setup of EC2 instances, including a public-facing Remote Desktop Server and a secure, private domain controller. Pick your Scripting Engine I'll be using an elevated Visual Studio Code (VSC) session, all testing has been completed with VSC. While PowerShell version 7 should work, it hasn’t been extensively tested. Variables that need your attention Open the DeployVPCwithDomain.ps1 script in Visual Studio Code (VSC), but hold off on executing it. There are sections you might want to modify first. Update the Region, the default is 'us-east-1' $region1 = "us-east-1" Set-defaultAWSRegion -Region $region1 Update the second and third octets of the CIDR block, as these will form the foundation for your VPC. 10.1.250.0/24 is for a future iteration where Transit Gateways are deployed for additional AD Sites. For now, 10.1.250.0/24 is free to use. $cidr = "10.1.1" # Dont use "10.1.250.0/24" $cidrFull = "$($cidr).0/24" During the execution of DeployVPCwithDomain.ps1, an additional Active Directory script is downloaded from GitHub. This script is used for the configuration of the Domain Controller. $domainZip = "https://github.com/Tenaka/AWS-PowerShell/raw/main/AD-AWS.zip" Invoke-WebRequest -Uri $domainZip -OutFile "$($pwdPath)\AD-AWS.zip" -errorAction Stop DeployVPCwithDomain.ps1, will pause at this point to allow updates to dcPromo.json contained within AD-AWS.zip , this is so the default password of ChangeMe1234 can be changed. If you decide to change the default password, be sure to update it in the UserData sections for both the private and public EC2 instances as well. Set-LocalUser -Name "administrator" -Password (ConvertTo-SecureString -AsPlainText ChangeMe1234 -Force) That's it for now... That's it for this blog, we're all prepped for executing the script! Make sure to come back for Part 2, where I dive into the specifics of what the script creates in AWS. We'll also explore how the script sets up a fully functional Active Directory environment, complete with a domain controller and remote access configurations. Stay tuned!

  • Deploying Windows Domains as an EC2 Instance with PowerShell - Part 2

    Welcome to Part 2! Let's take a deep dive into the specifics of what the DeployVPCwithDomain.ps1 script creates in AWS. Here's a quick recap: a public-facing Remote Desktop Server (RDS) and a private Domain Controller (DC) will be deployed into AWS with all the required AWS infrastructure and services using PowerShell. If you haven't read Part 1, I strongly suggest you do and ensure all the prerequisites are fulfilled, otherwise it's likely to get messy.
To reiterate, deploying this will incur AWS costs: the instance type is t3.medium and the volume is set to $ebsVolType = "io1" and $ebsIops = 1000.

Prerequisites
PowerShell version 7 or Visual Studio Code is required.
An AWS account and its corresponding Access ID and Secret Key.
The AWS account requires the 'AdministratorAccess' role or delegated permissions.
A basic understanding of both AWS and Windows Domains.

This blog will focus on the execution of the script and the provisioning of the AWS services, including the configuration of the VPC, subnets, and security groups and the deployment of EC2 instances. You'll also see how the script sets up a fully functional Active Directory environment, complete with a domain controller, OU, delegation and GPO configuration.

Let's Get Started!
Let's begin by loading DeployVPCwithDomain.ps1 in Visual Studio Code with elevated rights. I normally press 'Ctrl + A' and then F8 to execute the script; equally, F5 works.
The script starts by installing the necessary AWS PowerShell modules from the PowerShell Gallery. Loading the modules can be problematic. If any of the modules fail, the script should catch the error. I suggest closing VSC, deleting the modules from "C:\Users\%username%\Documents\PowerShell\Modules\", and then restarting the script from VSC.

Access Key and Secret Access Key
Enter both the Access Key and Secret Key created for the service account.

Regions
The script sets the default AWS region using `Set-DefaultAWSRegion -Region $region1`, and this region is also hardcoded in the userdata script for both S3 and EC2 instances.
$region1 = "us-east-1" # this is hardcoded in the EC2 userdata script
Set-DefaultAWSRegion -Region $region1

VPC
The VPC is configured with the following CIDR block: `$cidr = "10.1.1"` and `$cidrFull = "$($cidr).0/24"`. This CIDR block specifies the VPC's address range, providing 254 usable IP addresses.
$cidr = "10.1.1"
$cidrFull = "$($cidr).0/24"
$newVPC = New-EC2Vpc -CidrBlock "$cidrFull"
$vpcID = $newVPC.VpcId

Subnets
Two subnets, each with 30 usable addresses, will be created from the VPC: one for public access and one for private use.
$Ec2subnetPub = New-EC2Subnet -CidrBlock "$($cidr).0/27" -VpcId $vpcID
$Ec2subnetPriv = New-EC2Subnet -CidrBlock "$($cidr).32/27" -VpcId $vpcID

Internet Gateway
An Internet Gateway enables communication between your VPC and the internet by acting as a bridge, allowing instances within your VPC to send and receive traffic from the internet.
$Ec2InternetGateway = New-EC2InternetGateway
$InterGatewayID = $Ec2InternetGateway.InternetGatewayId
Add-EC2InternetGateway -InternetGatewayId $InterGatewayID -VpcId $vpcID

Public and Private Route Tables
To enable internet access for your VPC's public subnet, you'll need to create a route table and configure it to direct traffic to the Internet Gateway.
$Ec2RouteTablePub = New-EC2RouteTable -VpcId $vpcID
New-EC2Route -RouteTableId $Ec2RouteTablePub.RouteTableId -DestinationCidrBlock "0.0.0.0/0" -GatewayId $InterGatewayID
Register-EC2RouteTable -RouteTableId $Ec2RouteTablePubID -SubnetId $SubPubID

Public IP
`Invoke-WebRequest` fetches your public IP address by querying `ifconfig.me/ip`. If the request fails or returns an empty value, it defaults to "10.10.10.10".
$whatsMyIP = (Invoke-WebRequest ifconfig.me/ip).Content.Trim()
if ([string]::IsNullOrWhiteSpace($whatsMyIP) -eq $true){$whatsMyIP = "10.10.10.10"}
If the jump box becomes inaccessible and your public IP isn't static, it has likely changed, making it necessary to update the public security group (a sketch for refreshing this rule is included at the end of this post).

Security Groups
This script creates two security groups within the VPC. The PublicSubnet security group manages traffic rules for public subnet instances.
$SecurityGroupPub = New-EC2SecurityGroup -Description "Public Security Group" -GroupName "PublicSubnet" -VpcId $vpcID -Force -ErrorAction Stop
The script defines inbound and outbound rules for the security group.
# Inbound rules
$InTCPWhatmyIP3389 = @{IpProtocol="tcp"; FromPort="3389"; ToPort="3389"; IpRanges="$($whatsMyIP)/32"}
# Outbound rules
$EgAllCidr = @{IpProtocol="-1"; FromPort="-1"; ToPort="-1"; IpRanges=$cidrFull}
`Grant-EC2SecurityGroupIngress` applies inbound rules to the defined security group.
Grant-EC2SecurityGroupIngress -GroupId $SecurityGroupPub -IpPermission @($InTCPWhatmyIP3389)

S3 Bucket
An S3 bucket is created to host the AD script.
$news3Bucket = New-S3Bucket -BucketName "auto-domain-create-$($dateTodayMinutes)"
$s3BucketName = $news3Bucket.BucketName
$S3BucketARN = "arn:aws:s3:::$($s3BucketName)"
$s3Url = "https://$($s3BucketName).s3.amazonaws.com/Domain/"

S3 Bucket Access
To grant the EC2 instance access to the S3 bucket for running the AD script, a new IAM user is created.
$s3User = "DomainCtrl-S3-READ"
$newIAMS3Read = New-IAMUser -UserName $s3User
A new access key for the IAM user is generated and written into the UserData, allowing the EC2 instance to securely authenticate and access the S3 bucket.
$newIAMAccKey = New-IAMAccessKey -UserName $newIAMS3Read.UserName
$iamS3AccessID = $newIAMAccKey.AccessKeyId
$iamS3AccessKey = $newIAMAccKey.SecretAccessKey
The following IAM group is created and the IAM user added to it.
$s3Group = 'S3-AWS-DC'
New-IAMGroup -GroupName 'S3-AWS-DC'
Add-IAMUserToGroup -GroupName $s3Group -UserName $s3User
The policy for read access to the S3 bucket is defined.
$s3Policy = @'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*",
        "s3:Describe*"
      ],
      "Resource": "*"
    }
  ]
}
'@
The IAM policy is created and attached to the above group.
$iamNewS3ReadPolicy = New-IAMPolicy -PolicyName 'S3-DC-Read' -Description 'Read S3 from DC' -PolicyDocument $s3Policy
Register-IAMGroupPolicy -GroupName $s3Group -PolicyArn $iamNewS3ReadPolicy.Arn

VPC Endpoint
A VPC endpoint, which allows resources within your VPC to connect privately to AWS services without needing an internet gateway, is created to allow the private EC2 instance to access the S3 bucket.
$newEnpointS3 = New-EC2VpcEndpoint -ServiceName "com.amazonaws.us-east-1.s3" -VpcEndpointType Gateway -VpcId $vpcID -RouteTableId $Ec2RouteTablePubID, $Ec2RouteTablePrivID

UserData Scripts
EC2 UserData provides commands automatically to the instance at its initial launch and first boot.
In this case, the PowerShell script changes the default AWS-assigned password to 'ChangeMe1234' and renames the EC2 instance to JUMPBOX1 for the public instance.
$RDPScript = '
Set-LocalUser -Name "administrator" -Password (ConvertTo-SecureString -AsPlainText ChangeMe1234 -Force)
Rename-Computer -NewName "JUMPBOX1"
shutdown /r /t 10
'
The PowerShell script for EC2 instance UserData is encoded in Base64 because AWS requires UserData to be in this format.
$RDPUserData = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($RDPScript))

EC2 Encrypted Volumes
EC2 encrypted volumes use AWS Key Management Service (KMS) to automatically encrypt data at rest, in transit between the instance and the volume, and during snapshots. This ensures that all data on the volume is securely protected, with encryption keys managed by AWS. To enable EC2 encrypted volumes, KMS permissions must be granted in IAM, and the following values are specified.
$ebsVolType = "io1"
$ebsIops = 2000
$ebsTrue = $true
$ebsFalse = $false
$ebskmsKeyArn = $newKMSKey.Arn
$ebsVolSize = 50
$blockDeviceMapping = New-Object Amazon.EC2.Model.BlockDeviceMapping
$blockDeviceMapping.DeviceName = "/dev/sda1"
$blockDeviceMapping.Ebs = New-Object Amazon.EC2.Model.EbsBlockDevice
$blockDeviceMapping.Ebs.DeleteOnTermination = $enc
$blockDeviceMapping.Ebs.Iops = $ebsIops
$blockDeviceMapping.Ebs.KmsKeyId = $ebsKmsKeyArn
$blockDeviceMapping.Ebs.Encrypted = $ebsTrue
$blockDeviceMapping.Ebs.VolumeSize = $ebsVolSize
$blockDeviceMapping.Ebs.VolumeType = $ebsVolType
Additional help can be found @ https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2/image/block_device_mappings.html

EC2 Instance Attributes
The New-EC2Instance command and the following configuration parameters are declared to deploy and manage the EC2 instances in AWS.
$new2022InstancePub = New-EC2Instance `
    -ImageId $gtSrv2022AMI.value `
    -MinCount 1 -MaxCount 1 `
    -KeyName $newKeyPair.KeyName `
    -SecurityGroupId $SecurityGroupPub `
    -InstanceType t3.medium `
    -SubnetId $SubPubID `
    -UserData $RDPUserData `
    -BlockDeviceMapping $blockDeviceMapping

Accessing the Jump Box
The public RDP jump box, accessible only from your public IP, will launch quickly. Retrieve the instance's public IP from the AWS EC2 page, type 'mstsc' at the Run command, and enter the IP. Be sure to wait for the instance to fully initialize before connecting. Enter 'Administrator' and the password 'ChangeMe1234'; once logged on, change the password to something more secure.

Accessing the Domain Controller
The Domain Controller will take some time to deploy, even after it shows as Running on the EC2 page. It undergoes a few reboots and runs scripts to install AD roles, create an OU structure, delegate access, and set up the GPOs. It's a good time to grab a coffee and take a 10-minute break.
Once you've finished your coffee, retrieve the Domain Controller's private IP, based on the VPC private subnet, from within the AWS EC2 page. Then, from within the jump box, launch 'mstsc' and enter the Domain Controller's IP. The FQDN for the domain is 'testdom.loc'. Enter 'Administrator' and the password 'ChangeMe1234'. To update the password, open 'Active Directory Users and Computers', find the 'Administrator' account, and reset the password.

OU Structure
A comprehensive OU structure with GPOs, URA, and Restricted and Nested Groups is deployed in a tiered model.
It's too involved to cover here, but a full description can be found @ https://www.tenaka.net/post/deploy-domain-with-powershell-and-json-part-2-ou-delegation

JSON
The script deployed for AWS is a slightly modified version of the original. Notably, it is tied to the hostname of the Domain Controller, which is hardcoded as 'AWSDC01' in both the UserData and the JSON file. The other modification involves the IP address: the IP section in the JSON file is ignored, with the Domain Controller instead being statically assigned the IP provided by AWS's DHCP server. A sketch of how values like these can feed a forest build is included at the end of this post.
{
  "FirstDC": {
    "PDCName":"AWSDC01",
    "PDCRole":"true",
    "IPAddress":"10.0.2.69",
    "Subnet":"255.255.255.0",
    "DefaultGateway":"10.0.2.1",
    "CreateDnsDelegation":"false",
    "DatabasePath":"c:\\Windows\\NTDS",
    "DomainMode":"WinThreshold",
    "DomainName":"testdom.loc",
    "DomainNetbiosName":"TESTDOM",
    "ForestMode":"WinThreshold",
    "InstallDns":"true",
    "LogPath":"c:\\Windows\\NTDS",
    "NoRebootOnCompletion":"false",
    "SysvolPath":"c:\\Windows\\SYSVOL",
    "Force":"true",
    "DRSM":"Recovery1234",
    "DomAcct":"Administrator",
    "DomPwd":"ChangeMe1234",
    "PromptPw":"false"
  },

Finally.....
These two posts only scratch the surface of deploying Active Directory on AWS with PowerShell. Additional AD Sites, VPNs, AWS Transit Gateways and AD integration into AWS are some of the topics I hope to cover in the future. For now, thank you for taking the time to read my blog; I truly appreciate it. I hope you found it useful.
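As referenced in the JSON section above, here is a rough sketch of how values like those in dcPromo.json can drive a forest build with the ADDSDeployment module. This illustrates the pattern only and is not the author's exact AD-AWS script; the JSON path is a placeholder.
# Read the JSON and promote the first domain controller (illustrative only)
$dcPromo  = Get-Content -Path 'C:\Scripts\dcPromo.json' -Raw | ConvertFrom-Json
$firstDC  = $dcPromo.FirstDC
$safeMode = ConvertTo-SecureString $firstDC.DRSM -AsPlainText -Force

Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

Install-ADDSForest `
    -DomainName $firstDC.DomainName `
    -DomainNetbiosName $firstDC.DomainNetbiosName `
    -DomainMode $firstDC.DomainMode `
    -ForestMode $firstDC.ForestMode `
    -InstallDns:($firstDC.InstallDns -eq 'true') `
    -DatabasePath $firstDC.DatabasePath `
    -LogPath $firstDC.LogPath `
    -SysvolPath $firstDC.SysvolPath `
    -SafeModeAdministratorPassword $safeMode `
    -Force:($firstDC.Force -eq 'true')
Keeping the promotion parameters in JSON is what lets the same script be reused across labs: only the file changes, not the code.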
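And, as mentioned in the Public IP section, if your public IP changes after deployment the jump box RDP rule needs refreshing. A minimal sketch, assuming the same session variables as the deployment script ($SecurityGroupPub) and a placeholder for the old address:
# Refresh the RDP ingress rule on the PublicSubnet security group (illustrative only)
$oldIP = "203.0.113.10"   # placeholder for the previously allowed address
$newIP = (Invoke-WebRequest ifconfig.me/ip).Content.Trim()
$oldRule = @{IpProtocol="tcp"; FromPort="3389"; ToPort="3389"; IpRanges="$($oldIP)/32"}
$newRule = @{IpProtocol="tcp"; FromPort="3389"; ToPort="3389"; IpRanges="$($newIP)/32"}
Revoke-EC2SecurityGroupIngress -GroupId $SecurityGroupPub -IpPermission @($oldRule)
Grant-EC2SecurityGroupIngress -GroupId $SecurityGroupPub -IpPermission @($newRule)
Alternatively, the old rule can be removed and the new one added from the EC2 console under the security group's inbound rules.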

bottom of page