

  • Basic Ansible Setup for Windows

Introduction to Ansible

Welcome to this introduction to managing Windows from Ansible; unlike Microsoft's management solutions, it's free and agentless! Imagine a single tool that automates the setup, configuration, and maintenance of multiple Windows and Linux servers. With its simplicity, Ansible lets you easily orchestrate your server infrastructure. No more manual tasks, no more sleepless nights, just smooth sailing through the seas of automation. Well, it will allow those repetitive tasks to be automated at least.

Aims for Ansible

This article aims to offer straightforward guidance on configuring Ansible to manage a non-domain-joined Windows Server via the execution of remote tasks. Subsequent articles will expand upon this foundation by incorporating features such as Vault's password management, domain-joined servers, and Kerberos authentication.

What you will need to download

Latest Ubuntu Desktop ISO: https://ubuntu.com/download/desktop
Visual Studio Code for Linux: https://code.visualstudio.com/docs/setup/linux
Windows WinRM Configurator Script: https://github.com/AlbanAndrieu/ansible-windows/blob/master/files/ConfigureRemotingForAnsible.ps1
Ansible Documentation: https://docs.ansible.com/ansible/latest/index.html
Ansible Host and YAML Files: https://github.com/Tenaka/Ansible/tree/main

Pick your Linux of Choice (Ubuntu Desktop)

I'll be opting for my less preferred Linux distribution, Ubuntu Desktop. However, I find it to be the most user-friendly choice for Microsoft-focused engineers. Rocky Linux is a viable alternative, though its configuration might involve additional steps. I won't go into a detailed step-by-step installation of Linux; simply download the ISO, mount it within your preferred VM solution and install, following the default setup.

Some Sort of Virtualization or Cloud

I'll be opting for Hyper-V as my preferred virtualization platform to host both Ubuntu and Windows Server 2022. Its seamless integration with both Windows Server and the Windows 11 client eliminates any compatibility or migration concerns I may face moving images between the two. There are two recommended Hyper-V configurations for Linux installation: opt for a Generation 2 VM to enable Secure Boot capability, and within the Security section of the VM, select 'Microsoft UEFI Certificate Authority' (a scripted sketch of this VM build appears at the end of this section). Post-deployment, once the Linux VM is powered down, run the following command from PowerShell, selecting the resolution that aligns best with your monitor.

Set-VMVideo -VMName Ansible2 -HorizontalResolution 1900 -VerticalResolution 1200 -ResolutionType Single

Update Ubuntu

After successfully deploying Ubuntu, it is crucial to install any updates to ensure the smooth execution of future installations by running the following command from a shell terminal.

sudo apt-get update -y && sudo apt-get upgrade -y

Install Ansible

Ansible is installed with the following command.

sudo apt-get install ansible -y

List the currently installed collections; as you will see, there's support for OS, cloud, network devices and much more.

ansible-galaxy collection list

To update the Windows community collection that's installed by default:

ansible-galaxy collection install community.windows

To install the latest stable collection by Ansible, run the following:

ansible-galaxy collection install ansible.windows

Before continuing, type ip address in the terminal and record the address for later use.
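For those scripting the Hyper-V side as well, the Generation 2 VM described above can be built from PowerShell. This is a minimal sketch; the VM name, memory, disk size, switch name and ISO path are all assumptions to adjust for your own lab.

# Gen 2 VM with the Secure Boot template the article calls for
New-VM -Name 'Ansible2' -Generation 2 -MemoryStartupBytes 4GB -NewVHDPath 'C:\VMs\Ansible2.vhdx' -NewVHDSizeBytes 60GB -SwitchName 'Default Switch'
Set-VMFirmware -VMName 'Ansible2' -SecureBootTemplate MicrosoftUEFICertificateAuthority
# Attach the Ubuntu ISO ready for first boot
Add-VMDvdDrive -VMName 'Ansible2' -Path 'C:\ISO\ubuntu-desktop.iso'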
Install Microsoft's Visual Studio Code for Linux

To assist with writing YAML and to minimise the moving of files, Microsoft's Visual Studio Code for Linux will be installed on Ubuntu. If you can't outdo them, it seems the strategy is to join them. Well played Microsoft. Instructions for Ubuntu and other distros can be found @ https://code.visualstudio.com/docs/setup/linux. For Ubuntu, follow the next set of instructions.

sudo apt-get install wget gpg
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > packages.microsoft.gpg
sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg
sudo sh -c 'echo "deb [arch=amd64,arm64,armhf signed-by=/etc/apt/keyrings/packages.microsoft.gpg] https://packages.microsoft.com/repos/code stable main" > /etc/apt/sources.list.d/vscode.list'
rm -f packages.microsoft.gpg
sudo apt install apt-transport-https
sudo apt-get update
sudo apt-get install code

Launch Visual Studio Code once it's installed, then create a new directory named Ansible in the Documents directory. That concludes the installation and configuration of Ubuntu and Ansible. Now, let's proceed to the setup of Windows.

WinRM and Windows Server

Configuring Windows for remote management from Ansible is a little involved, with instructions available from the Ansible website:

Windows Setup: https://docs.ansible.com/ansible/latest/os_guide/windows_setup.html

Nevertheless, there exists a pre-configured script accessible on GitHub:

Windows Ansible Configurator Script: https://github.com/AlbanAndrieu/ansible-windows/blob/master/files/ConfigureRemotingForAnsible.ps1

To get up and running with this basic implementation, download 'ConfigureRemotingForAnsible.ps1' and execute the script from PowerShell with administrative rights. A cautionary note: the implemented configuration is open, granting remote WinRM access to any client. To address this, simply modify lines 417 and 423 by adding the specific remote IP of the Ansible server; in my case, it's 10.1.1.100. This updates the firewall rule from allowing any address to only the one specified.

10.1.1.1 = Windows Server
10.1.1.100 = Ubuntu\Ansible

ln 417 netsh advfirewall firewall add rule profile=any name="Allow WinRM HTTPS" dir=in localport=5986 protocol=TCP action=allow remoteIP=10.1.1.100

ln 423 netsh advfirewall firewall set rule name="Allow WinRM HTTPS" new profile=any remoteIP=10.1.1.100

To assess WinRM access to the Windows server from another Windows client, input the following commands in PowerShell. Remember to update the password and to replace WindowsIP with the target server's address. In case the Windows Firewall imposes the above RemoteIP restriction, include the test client's IP in the 'Allow WinRM HTTPS' remote scope firewall rule.

$username = "administrator"
$password = ConvertTo-SecureString -String "ChangeMe1234" -AsPlainText -Force
$cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
$session_option = New-PSSessionOption -SkipCACheck -SkipCNCheck -SkipRevocationCheck
Invoke-Command -ComputerName WindowsIP -UseSSL -ScriptBlock { ipconfig } -Credential $cred -SessionOption $session_option

Confirm that the WinRM service is running.

Get-Service WinRM

If the WinRM service isn't started, execute the following to set the service to automatic and start it.
Set-Service -Name WinRM -StartupType Automatic -ErrorAction SilentlyContinue
Get-Service -Name WinRM | Start-Service

To get the WinRM configuration, execute the following:

winrm enumerate winrm/config/listener

Listener
    Address = *
    Transport = HTTP
    Port = 5985
    Hostname
    Enabled = true
    URLPrefix = wsman
    CertificateThumbprint
    ListeningOn = 10.1.1.1, 127.0.0.1, ::1, fe80::a81e:3b96:6d3b:3d6c%3

Listener
    Address = *
    Transport = HTTPS
    Port = 5986
    Hostname = WIN-JE1B7QU8B8R
    Enabled = true
    URLPrefix = wsman
    CertificateThumbprint = FC24D87A798ECA4EA8BF4EE0C8CD7FD2CC51A67C
    ListeningOn = 10.1.1.1, 127.0.0.1, ::1, fe80::a81e:3b96:6d3b:3d6c%3

Ansible Environment

In Ansible, host files and YAML are crucial in defining and organizing the infrastructure you intend to manage.

Host Files: A host file in Ansible is where you specify the details of the servers or systems you want to manage. It typically includes information like IP addresses, hostnames, and grouping of hosts based on certain criteria (e.g., development, production). Host files help Ansible understand the inventory of systems it can control, making it an essential component for playbook execution. Without Ansible Vault, passwords are hardcoded in clear text within the hosts file. Vault will be covered in a subsequent article.

[Windows]
10.1.1.1

[Windows:vars]
ansible_user=administrator
ansible_password="ChangeMe1234"
ansible_connection=winrm
ansible_winrm_scheme=https
ansible_port=5986
ansible_winrm_server_cert_validation=ignore
ansible_kerberos_delegation=false

YAML (YAML Ain't Markup Language): YAML is a human-readable data serialization format often used for configuration files and data exchange between languages with different data structures. In Ansible, YAML is used to write playbooks, which are scripts that define the tasks to be executed on the managed hosts. It uses indentation to represent data hierarchy, making it easy to read. Writing it can present a bit of a challenge, as its hierarchical nature requires the structure to be indented and spaced correctly. In this example, the contents of the Ansible directory are copied to the targeted Windows Administrator's Desktop.

---
- name: Copy
  hosts: Windows
  become: false
  gather_facts: false
  vars:
    source: "/home/user/Documents/Ansible"
    destination: "Desktop/"
  tasks:
    - name: copy ping
      ansible.windows.win_copy:
        src: "{{ source }}"
        dest: "{{ destination }}"

Host and YAML files play a crucial role in making Ansible configurations clear, structured, and easy to manage. Host files define the inventory, while YAML defines the tasks and configurations to be applied to the hosts.

Host File and Initial Test

Ensure you're logged on to Ubuntu\Ansible and launch Visual Studio Code. Navigate to '/home/user/Documents/Ansible' and create a file named hosts.ini. Taking the above host file as an example, incorporate the necessary details to match your Windows system and save the file. Or download the examples provided: https://github.com/Tenaka/Ansible/tree/main

Let's create the most basic ping test to confirm access to Windows. Create a file named 'ping.yml' and insert the following.

---
- name: Ping Windows Test
  hosts: Windows
  gather_facts: false
  tasks:
    - name: Ping targets
      win_ping:

Launch a shell and cd to '/home/user/Documents/Ansible'. Type and execute the following command:

ansible-playbook -i hosts.ini ping.yml

Kudos on acing the Ansible setup for managing Windows!

File Copies To and Fro

Before delving into the YAML file, it's essential to acquaint yourself with the following path rules.
The Windows path rules should be written in the following format.

Good
tempdir=C:\\Windows\\Temp

Works
tempdir='C:\\Windows\\Temp'
tempdir="C:\\Windows\\Temp"

Bad, but sometimes works
tempdir=C:\Windows\Temp
tempdir='C:\Windows\Temp'
tempdir="C:\Windows\Temp"
tempdir=C:/Windows/Temp

Fails
tempdir=C:\Windows\temp
tempdir='C:\Windows\temp'
tempdir="C:\Windows\temp"

(The 'Fails' examples break because \t can be interpreted as an escape sequence, a tab, rather than a literal path character.)

Copies the contents of the Ansible directory to the Desktop of the target Windows server.

---
- name: Copy
  hosts: Windows
  become: false
  gather_facts: false
  vars:
    source: "/home/user/Documents/Ansible"
    destination: "Desktop/"
  tasks:
    - name: copy ping
      ansible.windows.win_copy:
        src: "{{ source }}"
        dest: "{{ destination }}"

Copies a named file from the Windows Desktop up to the Ansible directory using 'fetch'.

---
- name: Copy
  hosts: Windows
  become: false
  gather_facts: false
  vars:
    source: "Desktop/test1.txt"
    destination: "/home/user/Documents/Ansible/test1.txt"
  tasks:
    - name: copy ping
      ansible.builtin.fetch:
        src: "{{ source }}"
        dest: "{{ destination }}"

Further guidelines can be found @ https://docs.ansible.com/ansible/latest/os_guide/windows_usage.html

Basic Commands

This concludes the introduction by running a command line on the designated Windows server and saving the results to a text file.

---
- name: cmds
  hosts: Windows
  become: false
  gather_facts: false
  tasks:
    - name: some cmd
      win_command: cmd.exe /c whoami.exe > "Desktop\whoami.txt"
    - name: ipconfig
      win_command: cmd.exe /c ipconfig /all > "Desktop\ipconfig.txt"

Finally Done!

Thanks for your time reading this intro to managing Windows from Ansible. Creating each article demands time and effort, diverting me from other learning pursuits. Your comments and shares are highly valued and greatly appreciated. Finally, a big shout-out to Harv for opening my eyes to a life beyond SCCM.

  • Ansible Vault for Windows

Welcome Back

Hey there! Glad to have you back for the second Ansible article. This time around, we're diving into Ansible Vault and how to keep those Microsoft Windows passwords safe by encrypting them whilst they are at rest. If you missed out on the last article regarding the setup of Ansible and handling some basic tasks on a non-domain joined Windows Server, make sure to catch up on that first by following this link.

https://www.tenaka.net/post/basic-ansible-setup-for-windows

What is Ansible Vault

Ansible Vault is a feature that allows users to encrypt sensitive information, such as passwords and secret keys, within Ansible playbooks and files. This encryption ensures that the secrets are secure while they are at rest. To encrypt a secret, you simply use the "ansible-vault encrypt" command followed by the name of the file, or "ansible-vault encrypt_string 'Secret'" followed by the name to be assigned to the secret. You'll then be prompted to enter and confirm a password or passphrase. Once encrypted, the secret is stored in a format that is unreadable without the decryption key, providing a secure way to protect sensitive information within Ansible projects. Ansible Vault uses AES symmetric encryption, using the same password or passphrase for both encryption and decryption.

Basic Commands

Below are a few fundamental commands for utilizing Ansible Vault:

Create an encrypted file
ansible-vault create newFile.yml

Encrypt an existing file
ansible-vault encrypt existingFile.yml

View encrypted content of a file
ansible-vault view existingFile.yml

Edit the encrypted file
ansible-vault edit existingFile.yml

Decrypt an encrypted file
ansible-vault decrypt existingFile.yml

Change the password that encrypts/decrypts the secret (rekeying)
ansible-vault rekey existingFile.yml

Create an encrypted string
ansible-vault encrypt_string 'ChangeMe1234' --name ansible_password

Help Yourselves....

A working set of files deploying ansible-vault with encrypted secrets can be found at the following link, do help yourselves.

https://github.com/Tenaka/Ansible_Encrypted_Password

Set Nano as the Default Editor

To avoid ansible-vault opening new files with vi, let's designate Nano as the default editor. Type 'select-editor' and then choose option 1.

Let's prove it works before Encrypting

I won't immediately introduce encrypted passwords into the mix. Instead, we'll set up and test the files using plain text passwords; later, I'll encrypt them. This will aid in troubleshooting.

Jinja2 is a templating engine used to create dynamic content within Ansible playbooks. It allows for the use of variables, conditionals, loops, and filters to customize configurations based on the environment or data. The ansible_password="{{ vault_ansible_password }}" entry is one such example; it's used in the hosts.ini file and resolves to the value in win.yml.

If you have been following, Visual Studio Code for Linux is installed; if not, nano will suffice. First, navigate to the Ansible directory previously created under the Documents directory and execute the following command:

mkdir win-encrypt

Change directory (cd win-encrypt) into the directory and create the following 3 files: hosts.ini, ping.yml and win.yml. This will provide a simple ping test to the Windows Server on 10.1.1.1 with the Administrator account and a password of 'ChangeMe1234'. Ensure that 'ping.yml' adheres to the YAML framework or a whole world of pain and 'why aren't you working' will ensue.
The "no_log: true" parameter in Ansible is used to prevent sensitive data, such as passwords or API keys, from being displayed in the console output or logged to files. Including this now will make life difficult, waiting until your fully working. hosts.ini [win] 10.1.1.1 [win:vars] ansible_user=administrator ansible_connection=winrm ansible_password="{{vault_ansible_password}}" ansible_winrm_scheme=https ansible_port=5986 ansible_winrm_server_cert_validation=ignore ansible_kerberos_delegation=false ping.yml --- - name: Ping win Test hosts: win gather_facts: false vars_files: - win.yml tasks: - name: Ping targets win_ping: no_log: True win.yml vault_ansible_password: ChangeMe1234 Execute the following command to test the use of the clear text password: ansible-playbook -i hosts.ini ping.yml Let's get it Encrypted Once we've confirmed the clear text password works, we can proceed to encrypt the win.yml file using the following command. ansible-vault encrypt win.yml Enter the password used for encrypting the file, I'm using the ultra-secure 'Password1234'. In production don't do this..... Confirm the win.yml is encrypted with ' cat win.yml '. It should look something like the image below. Type the following command to test accessing Windows using the encrypted vault file: ansible-playbook -i host.ini ping.yml --ask-vault-pass Enter the password 'Password1234' at the prompt. Alternative Method to Encrypt the Password Another way to encrypt the password is by utilizing the encrypt-string option. Type the following command directing the output to winString.yml ansible-vault encrypt-string 'ChangeMe1234' --name vault_ansible_password > winString.yml I then renamed the existing win.yml and then renamed winString.yml to win.yml using the mv command. This is a Bad Idea....... Once we've secured the Windows passwords and grown weary of the password prompts or the playbooks are to be scheduled, we'll embed the ansible-vault password into a plaintext file, undoing our previous efforts. I've rooted enough Linux boxes to know this is a bad idea. However, today is all about encrypting the Windows passwords whilst at rest. Vault Password File Here we go, create a file named 'key' in the root of the Ansible directory and enter the vault password of 'Password1234': nano ../key Secure the key file to allow the owner Read and Write access. chmod 600 ../key Execute the playbook swapping out --ask-vault-pass for --vault-password-file ../key. ansible-playbook -i host.ini ping.yml --vault-password-file ../key Alternatively, if you prefer not to use --vault-password-file, create an ansible.cfg file within the win-encrypt directory using Nano, and input the following details. Run the playbook again without the vault password or by specifying the file location. Final Thoughts That wraps up this guide on employing ansible vault to secure Windows passwords while they're at rest. While Ansible Vault effectively secures Windows passwords, its effectiveness is compromised by storing the vault password in plain text. Despite its encryption capabilities, this vulnerability underscores the importance of implementing additional security measures to safeguard sensitive information effectively or another product in addition to ansible vault to manage secrets. Maybe that should be the aim of the next article, it's that or ansible managing domain computers with Kerberos. Drop a comment and let me know? 
Thank you for taking the time to read this article; your feedback, comments, and shares are immensely valued and deeply appreciated.

  • Ansible with Windows Domains and Kerberos

Welcome Back

Hey there! I'm glad to have you back for the third Ansible article. This time, we're diving into using Ansible to manage Windows Domains and authenticating with Kerberos.

Catch up

If you missed out on the previous articles regarding the setup of Ansible and encrypting the at-rest passwords, make sure to catch up on those first by following the links.

Basic Setup of Ansible managing a Standalone Windows Server
https://www.tenaka.net/post/basic-ansible-setup-for-windows

How to Secure the at Rest Passwords with Ansible Vault
https://www.tenaka.net/post/ansible-vault-for-windows

Virtual Machines Required

Ansible (Ubuntu) = 10.1.1.100
Domain Controller = 10.1.1.50, FQDN = TENAKA.LOC
DHCP Server = 10.1.1.1
Scope Options: 004 Time Server = 10.1.1.50, 006 DNS Server = 10.1.1.50

Credentials

Domain Account = Administrator
Windows Passwords = ChangeMe1234
Ansible Vault Password = Password1234

Help Yourselves....

A working set of files for configuring Ansible to manage a Windows Domain can be found at the following link, do help yourselves.

https://github.com/Tenaka/Ansible_Kerberos

Ubuntu Kerberos Packages

To ensure the smooth installation of new Ubuntu features, it's important to keep things up to date. From a terminal shell on Ubuntu, execute the following:

sudo apt-get update -y && sudo apt-get upgrade -y

Additional packages are required to provide Kerberos user authentication against a Windows Domain.

sudo apt-get install python3-dev libkrb5-dev krb5-user

Complete the prompts to match your domain. Writing the Fully Qualified Domain Name (FQDN) in capitals is essential. Next, enter the host of the PDC followed by the FQDN, again in capitals, and repeat the same value at the following prompt. I've only 1 Domain Controller (DC); however, this can be updated later, so it isn't essential. For now, add a single DC.

For other Linux Variants

If you're using something other than Ubuntu, the link below provides support. I've extracted the relevant commands below:

https://docs.ansible.com/ansible/latest/os_guide/windows_winrm.html

Through Yum (RHEL/CentOS/Fedora, older versions)
yum -y install gcc python-devel krb5-devel krb5-libs krb5-workstation

Through DNF (RHEL/CentOS/Fedora, newer versions)
dnf -y install gcc python3-devel krb5-devel krb5-libs krb5-workstation

Through Apt (Ubuntu older than 20.04 LTS (Focal))
sudo apt-get install python-dev libkrb5-dev krb5-user

Through Apt (Ubuntu 20.04 LTS and newer)
sudo apt-get install python3-dev libkrb5-dev krb5-user

Through Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools

Through Pkg (FreeBSD)
sudo pkg install security/krb5

Through OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3

Through Pacman (Arch Linux)
pacman -S krb5

KrbFive Config

Let's enhance the readability and tailor the default krb5.conf file to better suit our requirements.

sudo nano /etc/krb5.conf

Pressing Ctrl + K deletes a line, allowing you to eliminate all lines except those containing domain-specific settings. The [realms] section is where you can add extra Domain Controllers (DCs) as kdc entries.

To verify Kerberos authentication, we'll use kinit along with the following command, ensuring that the FQDN is in capitals.

kinit administrator@TENAKA.LOC

Run klist to display the contents of the Kerberos Ticket Granting Ticket (TGT).

WinRM and GPO

WinRM (Windows Remote Management) is a Microsoft implementation of the WS-Management Protocol, which allows for remote management of Windows-based systems over HTTP(S).
It enables administrators to remotely execute commands on all permissible computers and servers. To provide WinRM access in a domain environment using GPOs, administrators can configure GPO settings to enable WinRM, define WinRM listeners, specify trusted hosts, configure authentication settings, and set other WinRM-related policies. These policies are then applied to the relevant organizational units (OUs), groups, or individual computers within the Active Directory domain.

Tier Zero and Ansible

Only the Domain Controller (DC) is being managed remotely for demonstration purposes. This service falls under tier zero, along with Certificate Authorities (CAs) and the other identity-managing services mentioned previously. Ansible should not manage these tier zero services unless other precautions are taken; for instance, consider isolating a dedicated Ansible server specifically tasked with managing tier zero services.

Group Policy

Move to the Domain Controller, open Group Policy Management and create a new GPO at the root of the domain.

Navigate to 'System Services' and set the 'Windows Remote Management (WS-Management)' service to Automatic.

Create a new 'Inbound' firewall rule with the following settings:

Protocol = TCP
Port = 5985 and 5986
Remote IP Address = 10.1.1.100 (Ansible)
Profile = Domain Only

Navigate to 'WinRM Service' under Administrative Templates > Windows Components > Windows Remote Management (WinRM). Set the following (a verification sketch for these settings appears at the end of this article):

Enabled - Allow remote server management through WinRM, IPv4 Filter = *
Disabled - Allow Basic authentication
Disabled - Allow CredSSP authentication
Enabled - Allow unencrypted traffic
Disabled - Disallow Kerberos authentication

Regarding the 'Allow unencrypted traffic' setting: Kerberos encrypts data between client-server communications. Ansible, leveraging Kerberos, doesn't need HTTPS because Kerberos handles encryption and authentication, ensuring secure communication.

Ansible Config for Kerberos

If you've been keeping up with the earlier articles on Ansible's Windows management, create a new directory titled 'Domain' and duplicate hosts.ini, ping.yml, and win.yml into it. Alternatively, the files can be downloaded from:

https://github.com/Tenaka/Ansible_Kerberos

If not, launch nano to duplicate the files below, not forgetting to change the hostname to that of your own DC.

hosts.ini maintains your hosts and variables, including the Ansible Jinja2 variable ansible_password="{{ vault_ansible_password }}", which resolves to the value in win.yml.

ping.yml provides a simple ping test to confirm authentication and network accessibility.

To create win.yml and encrypt the Windows Domain password, execute the following command.

ansible-vault create win.yml

Enter the encryption password of 'Password1234' at the prompts, then type 'vault_ansible_password: ChangeMe1234'. Viewing the file afterwards with cat shows only the encrypted vault payload.

To test the playbook against the Domain Controller, execute the following:

ansible-playbook -i hosts.ini ping.yml --ask-vault-pass

Enter the Vault password of 'Password1234'.

The test ping via Ansible using Kerberos authentication was successful, and the world of free management of Microsoft Windows infrastructure is at your feet.

Final Thoughts

Implementing Ansible for Windows domain management proved straightforward, requiring minimal adjustments to existing Ansible files and only a few GPO tweaks. In production, avoiding Domain Admin usage and employing delegated service accounts with segregated roles enhances security.
Relying on a single domain admin service account for all tasks would be less than ideal.
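If the ping test fails even though kinit works, it's usually the GPO that hasn't landed yet. A hedged verification sketch, run from an elevated PowerShell session on the target DC:

# Confirm the service state the GPO should have set
Get-Service -Name WinRM | Select-Object Status, StartType

# List the active listeners; Kerberos-authenticated traffic arrives on HTTP port 5985
winrm enumerate winrm/config/listener

# Re-apply and summarise policy if the settings still look wrong
gpupdate /force
gpresult /r /scope computer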

  • Deploying Windows Domains as an EC2 Instance with PowerShell - Part 1

Welcome back! In this blog, I'll demonstrate how you can leverage PowerShell to automate the entire setup of a Windows domain environment on AWS services, from creating the VPC to configuring the EC2 encrypted volumes.

Before we start, deploying this will incur AWS costs; the instance type is t3.medium and the volume is set to $ebsVolType = "io1" and $ebsIops = 1000.

This is Part 1 of a 2-parter, and it will focus on setting up the scripting environment and meeting the prerequisites. The ultimate goal is to deploy a public-facing Remote Desktop Server (RDS) and a private Domain Controller (DC) by PowerShell. The Remote Desktop Server will serve as a jump box, providing remote access to the network, while the Domain Controller will be securely tucked away in a private subnet, only accessible through the RDS.

Prerequisites

There are a few prerequisites before deploying EC2 instances from PowerShell:

PowerShell version 7 or Visual Studio Code is required.
An AWS account and its corresponding Access ID and Secret Key.
The AWS account requires the 'AdministratorAccess' role or delegated permissions.
A basic understanding of both AWS and Windows Domains.
The default password for the EC2 instances is 'ChangeMe1234'.

Previous post on automating Domain and OU creation

Before diving into this blog, I highly recommend checking out the previous blogs where I used PowerShell to deploy a domain and create an Organizational Unit (OU) structure. The script used for this AWS blog is a slightly customized version of the Domain script below and as such doesn't require downloading.

The description
https://www.tenaka.net/post/deploy-domain-with-powershell-and-json-part-1

The original Domain script
https://github.com/Tenaka/Active-Directory-Automated-Deployment-and-Delegation

Install Visual Studio Code or PowerShell

I recommend installing either PowerShell 7 (PS7) or Visual Studio Code (VSC), along with the latest .NET SDK.

.NET SDKs for Visual Studio
https://dotnet.microsoft.com/en-us/download/visual-studio-sdks

Download Visual Studio Code
https://code.visualstudio.com/download

Installing PowerShell on Windows
https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell

AWS Account, Permissions and Access ID

From within the AWS console, navigate to IAM and create a service account specifically for executing scripts to create the required AWS services. Ensure this service account has the necessary permissions by adding the following policies and the two custom policies.

AmazonEC2FullAccess, AmazonS3FullAccess, AWSKeyManagementServicePowerUser, AmazonSSMReadOnlyAccess, IAMFullAccess, AmazonSSMManagedInstanceCore

KMS policy to grant the enabling of EC2 encrypted volumes; this policy requires further tweaking as it's far too encompassing.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateRandom",
                "kms:ListRetirableGrants",
                "kms:CreateCustomKeyStore",
                "kms:DescribeCustomKeyStores",
                "kms:ListKeys",
                "kms:DeleteCustomKeyStore",
                "kms:UpdateCustomKeyStore",
                "kms:Encrypt",
                "kms:ListAliases",
                "kms:GenerateDataKey",
                "kms:DisconnectCustomKeyStore",
                "kms:CreateKey",
                "kms:DescribeKey",
                "kms:ConnectCustomKeyStore",
                "kms:CreateGrant"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "kms:*",
            "Resource": "*"
        }
    ]
}

Additionally, Session Manager rights are needed.
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ssm:SendCommand", "ssmmessages:CreateDataChannel", "ssmmessages:OpenDataChannel", "ssmmessages:OpenControlChannel", "ssmmessages:CreateControlChannel" ], "Resource": "*" } ] } If nothing else works, consider adding the 'AdministratorAccess' policy to the service account. Create Access Key Create an Access Key by navigating to the Security tab of the service account and creating a 'Command Line Interface' (CLI) use case. Record the Access Key and Secret Access Key. Download this script... After you've familiarized yourself with the above concepts covered in our previous blogs and created the AWS account with the correct rights, download the PowerShell DeployVPCwithDomain.ps1 script from the link below. https://github.com/Tenaka/AWS-PowerShell/blob/main/DeployVPCwithDomain.ps1 This script is designed to automate the setup of EC2 instances, including a public-facing Remote Desktop Server and a secure, private domain controller. Pick your Scripting Engine I'll be using an elevated Visual Studio Code (VSC) session, all testing has been completed with VSC. While PowerShell version 7 should work, it hasn’t been extensively tested. Variables that need your attention Open the DeployVPCwithDomain.ps1 script in Visual Studio Code (VSC), but hold off on executing it. There are sections you might want to modify first. Update the Region, the default is 'us-east-1' $region1 = "us-east-1" Set-defaultAWSRegion -Region $region1 Update the second and third octets of the CIDR block, as these will form the foundation for your VPC. 10.1.250.0/24 is for a future iteration where Transit Gateways are deployed for additional AD Sites. For now, 10.1.250.0/24 is free to use. $cidr = "10.1.1" # Dont use "10.1.250.0/24" $cidrFull = "$($cidr).0/24" During the execution of DeployVPCwithDomain.ps1, an additional Active Directory script is downloaded from GitHub. This script is used for the configuration of the Domain Controller. $domainZip = "https://github.com/Tenaka/AWS-PowerShell/raw/main/AD-AWS.zip" Invoke-WebRequest -Uri $domainZip -OutFile "$($pwdPath)\AD-AWS.zip" -errorAction Stop DeployVPCwithDomain.ps1, will pause at this point to allow updates to dcPromo.json contained within AD-AWS.zip , this is so the default password of ChangeMe1234 can be changed. If you decide to change the default password, be sure to update it in the UserData sections for both the private and public EC2 instances as well. Set-LocalUser -Name "administrator" -Password (ConvertTo-SecureString -AsPlainText ChangeMe1234 -Force) That's it for now... That's it for this blog, we're all prepped for executing the script! Make sure to come back for Part 2, where I dive into the specifics of what the script creates in AWS. We'll also explore how the script sets up a fully functional Active Directory environment, complete with a domain controller and remote access configurations. Stay tuned!

  • Credential Stuffing

Reusing passwords across multiple accounts can put you at significant risk because hackers can exploit this practice through a technique called 'credential stuffing'. Here's how it works and why it's dangerous:

Data Breaches

When a company or service is hacked, user data, including usernames and passwords, can be stolen. These credentials are often sold or shared on the dark web or hacker forums. Even if only one account is compromised, it can have ripple effects if you reuse the same password across different accounts.

Credential Stuffing

Hackers use automated tools to take usernames and passwords from one breached site and try them on many others. For example, if your email and password were exposed in a breach from an e-commerce site, a hacker might try to log into your bank, social media accounts, and email using the same credentials. If you've reused the same password, the hacker could gain access to multiple accounts.

Chain Reaction of Hacks

Once hackers gain access to one account, they often look for ways to escalate their attack:

Email Compromise: If they gain access to your email account, they can initiate password reset requests for other services, further expanding their control over your digital life.
Social Media Exploits: Hackers can hijack social media accounts to send phishing messages to your contacts, spreading the attack even further.
Financial Loss: Access to financial accounts can lead to unauthorized transactions, drained accounts, or identity theft.

Increased Success Rate

Automated scripts used in credential stuffing can check thousands of accounts in minutes. Reusing passwords increases the likelihood that the hacker's efforts will succeed, making it easier for them to penetrate more accounts with minimal effort.

Difficulty in Detecting

Since hackers use the correct username and password combinations during these attacks, it may not immediately trigger security alerts. Many services assume that a correct login attempt is legitimate, making it difficult for you or the service to detect the breach before damage is done.

Inability to Track Breaches

When you reuse passwords, it becomes hard to know which service caused the security breach. If you use the same password for ten different sites, and one gets hacked, you'll need to change the password for all ten sites. In contrast, if you used a unique password for each site, only the compromised service would be affected.

How to Protect Yourself:

Use Unique Passwords for Each Account: This ensures that even if one password is compromised, your other accounts remain secure.
Utilize a Password Manager: These tools help generate and store complex, unique passwords for each site, so you don't have to remember them all.
Enable Two-Factor Authentication (2FA): Adding an extra layer of security can prevent hackers from accessing your accounts even if they have your password.

By avoiding password reuse, you significantly reduce the risk of widespread damage from a single data breach.

  • Quick Guide for Intune's Autopilot

    Intune's Autopilot automates the configuration and setup of new devices, allowing users to start working with pre-configured settings, applications, and security policies as soon as they power on their device. In this blog, we’ll explore how Microsoft Intune Autopilot works, let's get started. Dynamic Group for Deployment Profile From within Intune, browse to Groups and then click on New Group. To ensure that every newly registered device is associated with Autopilot automatically you need to first create a dynamic Azure AD (Entra) Security Group. Edit the Dynamic Query, then paste the following string and Save. (device.devicePhysicalIDs -any (_ -startsWith "[ZTDid]")) Enrollment Configuration From within Intune, browse to Devices, Windows, then Enrollment. Device Platform Restrictions Intune Device Platform Restrictions controls which types of device can access organizational resources based on their platform (e.g., Windows, iOS, Android, macOS). This feature helps enhance security by limiting access to only approved device types and blocking untrusted or unsupported platforms. This step isn't necessary for Autopilot to work as the default is to allow all devices, however we will block Windows Personally owned devices. Click on 'All Users' link. Change Personally owned devices for Windows (MDM) to Block. Deployment Profiles Autopilot deployment profiles in Microsoft Intune are configuration templates that define how new devices are set up and managed during the out-of-box experience (OOBE). These profiles allow automated and customizable deployment processes, specifying settings like Azure AD join type, user-driven or self-deploying mode. Navigate to Deployment Profiles within the Enrollment tab, then select Create Profile. Provide name and select Yes for 'Convert all targeted devices to Autopilot', this enables all non-Autopilot, or current members of Entra to become Autopilot registered when they are assigned to the profile group. Select User-Driven and any other pertinent settings. Assign the Windows Autopilot group created earlier and then save the changes. That covers the basics of configuring auto enrollment. I'll skip the Enrollment Status Page for now, as it's not essential for this introductory guide. Enrollment of a Device For the purposes of this blog, a Windows 11 23H2 OS has been installed on Hyper-V, and the setup has been progressed to the Region selection page. Press Shift & F10 for an Administrative shell Type the following to download the Autopilot PowerShell module. Powershell install-script get-windowsautopilotinfo set-executionpolicy -ex bypass get-windowsautopilotinfo -online Enter Azure credentials to register the device. Accept the permissions request. Wait while the device completes the registration. Go back to Autopilot under the Devices section and verify that the device has been successfully registered. Restart the device, which will then connect to Intune and retrieve the assigned policies. Enter your Azure credentials. Once the device is ready, login, and after a brief wait, any assigned applications will begin to install. That wraps up this quick configuration guide for Intune Autopilot. Links: https://learn.microsoft.com/en-us/autopilot/enrollment-autopilot
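If signing in during OOBE isn't an option, the same community script can instead export the hardware hash for manual upload (Intune > Devices > Enrollment > Devices > Import). A sketch based on the script's documented parameters; the C:\HWID path is an assumption.

# Collect the hardware hash to a CSV for manual import into Intune
New-Item -Path 'C:\HWID' -ItemType Directory -Force
Set-Location -Path 'C:\HWID'
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process -Force
Install-Script -Name Get-WindowsAutoPilotInfo -Force
Get-WindowsAutoPilotInfo -OutputFile 'AutopilotHWID.csv'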

  • Disable Admin Shares

<#
.Synopsis
Disable Admin Shares

.Description
Disable Admin Shares C$, IPC$, ADMIN$ to prevent remote access and local access via \\127.0.0.1\c$ from a browser, shortcut or cmd. Disabling admin shares will prevent ConfigMgr from deploying the client agent and remote administrative access.

.Version
#>

# AutoShareWks - stops the automatic creation of admin shares on workstation SKUs
New-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\LanmanServer\Parameters' -Name AutoShareWks -PropertyType DWORD -Value 0 -Force

# AutoShareServer - stops the automatic creation of admin shares on server SKUs
New-ItemProperty -Path 'HKLM:\System\CurrentControlSet\Services\LanmanServer\Parameters' -Name AutoShareServer -PropertyType DWORD -Value 0 -Force
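The change only applies once the Server service re-reads its parameters. A quick verification sketch; note that restarting LanmanServer briefly interrupts any active SMB sessions.

# Restart the Server service so the AutoShare* values take effect
Restart-Service -Name LanmanServer -Force

# C$ and ADMIN$ should no longer be listed among the shares
Get-SmbShare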

  • Understanding Windows File Altitude: A Deep Dive into File System Filter Drivers

When delving into the intricate workings of Windows file system architecture, one of the more technical concepts that often emerges is file altitude. If you've ever explored file system filter drivers or engaged in low-level system development, understanding this concept is crucial. This blog aims to break down the complexities of Windows file altitude, the role it plays in the kernel, and how it affects file system operations.

What is a Windows File System Filter Driver?

Before diving into file altitude, it's essential to understand the role of file system filter drivers. In Windows, a filter driver operates within kernel mode and can monitor, modify, or extend the functionality of file system operations. These drivers can be inserted into the I/O request path, between the application and the underlying file system, to intercept and possibly modify file operations such as read, write, and delete requests.

File system filter drivers are typically used for:

Antivirus solutions: to monitor and block malicious activities.
File encryption or compression: to apply encryption or compression on the fly.
Backup solutions: to intercept and manage file access for consistent backups.
File system auditing or monitoring: for logging file system activities or imposing policies.

Introducing the Concept of File Altitude

In a system where multiple filter drivers are installed, there needs to be a way to define their order of operation. This is where altitude comes into play. Simply put, file altitude is a numerical value that dictates the position of a filter driver within the file system stack. The higher the altitude, the closer a driver is to the application layer (and the further from the actual file system). Windows ensures that these altitudes are registered and properly sequenced to avoid conflicts between drivers that might need to operate in a specific order.

How Altitude Works

Imagine a scenario where multiple drivers are installed for various purposes (e.g., an antivirus, a backup tool, and a logging tool). These drivers all want to interact with I/O requests. Without an ordering mechanism, there could be conflicts:

An antivirus might want to inspect a file before any backup software reads it.
The backup software might need to know the original state of a file before encryption is applied.

Altitude values help resolve this by assigning each filter driver a priority based on its altitude. Windows ensures that the drivers with the highest altitudes receive I/O requests first, while those with lower altitudes are closer to the file system (and see the request last).

Altitude Numbering System

The altitude value is a floating-point number ranging from 0.000000 to 999999.999999. By convention, the lower the altitude number, the closer the driver is to the file system itself, and the higher the number, the closer it is to user-mode operations.

Upper-range altitudes (e.g., 380000-499999) are typically reserved for drivers like encryption and compression tools that need to operate closer to user-mode applications.
Middle-range altitudes (e.g., 200000-379999) are often used by antivirus software, which needs to filter I/O requests before they reach the disk.
Lower-range altitudes (e.g., 0-199999) are usually occupied by drivers that need to interact closely with the file system itself, such as volume managers and file system encryption.

Each filter driver registered with the system must provide a unique altitude to prevent collisions or ordering issues.
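Before we get to the dedicated tooling, it can be illuminating to see these values on a live system; registered minifilters record their altitudes in the registry. A read-only sketch, assuming the conventional Services\<driver>\Instances\<instance> layout used by minifilter installations:

# Enumerate minifilter instance altitudes from the service registry keys,
# highest (closest to the application layer) first
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\*\Instances\*' -ErrorAction SilentlyContinue |
    Where-Object { $_.Altitude } |
    Sort-Object -Property @{ Expression = { [double]$_.Altitude } } -Descending |
    Select-Object @{ Name = 'Instance'; Expression = { $_.PSChildName } }, Altitude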
Managing Altitude in Windows

The Windows OS provides a centralized mechanism for managing filter driver altitudes. Filter Manager, a built-in component of Windows starting from Windows Server 2003, facilitates the registration and sequencing of these filter drivers. It ensures that drivers operate in the correct order based on their altitude, preventing lower-altitude drivers from inadvertently disrupting higher-altitude ones.

Querying Altitude

You can query a system's file system filter driver altitudes using the `fltmc` utility in an elevated command prompt. This utility displays the loaded filter drivers, their altitudes, and their current operational state.

fltmc filters

The output lists each loaded filter by name alongside its number of instances, its altitude, and its frame.

Registering a Driver with an Altitude

When developing or installing a new file system filter driver, you need to register the driver with an appropriate altitude to ensure that it functions correctly within the filter stack. The driver installation process typically handles this via INF files or registry entries.

Altitudes are not chosen arbitrarily; they are managed and assigned by Microsoft. Developers must register for an altitude by contacting Microsoft's filter manager team to ensure that no two drivers conflict by using the same altitude.

Handling Altitude Conflicts

Altitude conflicts can arise when two or more drivers attempt to register for the same or similar altitudes, especially if one driver isn't aware of the other. If a conflict occurs:

It can lead to unpredictable system behavior, including I/O request handling errors.
In worst-case scenarios, it could result in BSODs (Blue Screens of Death) due to improper sequencing of I/O operations.

By adhering to the altitude registration process, conflicts are minimized. The Filter Manager enforces altitude uniqueness to prevent these kinds of operational failures.

Practical Example: Antivirus and Backup Solutions

Consider a scenario where an antivirus solution and a backup tool are installed on the same machine:

Antivirus Filter: This filter driver operates at an altitude of 350000. When an application requests to read or write a file, the antivirus filter intercepts the request first. It scans the file for malicious content before passing it down the stack.

Backup Filter: This filter driver is at altitude 250000. After the antivirus completes its scanning, the request moves to the backup filter, which monitors the file for any changes, making a backup copy if necessary.

File System Operations: Finally, the request is passed down to the actual file system, which handles the physical read or write operations.

Without the correct altitude order, the backup software might try to back up a file before it has been scanned by the antivirus software, potentially saving a corrupted or infected file.

Conclusion

In summary, file altitude is a critical mechanism in the Windows file system architecture that governs the order in which filter drivers process I/O requests. By assigning a specific altitude to each filter driver, Windows ensures that drivers operate in the correct sequence, minimizing conflicts and ensuring the integrity of file system operations. Whether you're developing file system tools or managing enterprise-level systems, understanding and properly handling file altitude is crucial for maintaining system stability and security.

  • Windows AutoPilot Device Preparation

Windows Autopilot's Device Preparation is its new 'user-driven' workflow. Instead of IT staff registering all devices prior to handing them over to staff, there's the option for the device to be shipped directly from an OEM to the end user. With minimal steps (powering on the device, selecting the locale, connecting to Wi-Fi, and signing in with Microsoft Entra credentials) the system automates the rest. The device automatically joins Microsoft Entra ID, enrolls in Intune, installs key apps, and runs essential scripts, streamlining setup for users while reducing IT workload.

Key Features:

The device joins Microsoft Entra ID.
Intune enrollment with preconfigured policies.
Automated installation of up to 10 essential apps and PowerShell scripts.

This article covers the configuration steps for setting up Windows Autopilot device preparation using a user-driven Microsoft Entra join workflow.

Requirements:

Windows 11, version 23H2 with KB5035942 or later.
Windows 11, version 22H2 with KB5035942 or later.

Enrollment Config - Entra

Navigate to Entra with the following URL to allow users to enroll devices.

https://portal.azure.com/#home

Then go to Microsoft Entra ID > Devices (left-hand window) > Device Settings, and allow 'All' users to join devices.

Enrollment Config - Intune

Now navigate to Intune to configure the MDM user scope.

https://intune.microsoft.com/#home

Then go to Devices > Enrollment > Automatic Enrollment and select 'All' for the MDM user scope.

User and Device Group

A couple of groups are required: one to allow named users the ability to enroll devices, and one for the devices themselves. From within Intune, navigate to Groups.

Create a security group with a name that reflects its purpose, e.g. AutoPilot_DevicePreparation_Users, and add named users (or all users) to this group.

Create a 2nd security group for devices; don't add any members. Modify the device group's owners and add the built-in service provided by Microsoft, 'Intune Provisioning Client', as the owner. This provides the 'Just in Time' rights for device auto enrollment (a scripted sketch of this step follows at the end of this article).

AutoPilot Device Preparation

Navigate to Devices, Windows, Enrollment and select 'Device Preparation Policies'. Provide a name and add the 'AutoPilot_DevicePreparation_Device' group. Under Configuration Settings, leave the defaults. I've added some apps and scripts; the maximum is 10. For applications to install, the user must be a member of the deployment group, so add the 'AutoPilot_DevicePreparation_Users' group; these can be users who are part of the IT team that adds devices to Intune, or all users. Save.

Deployment

Sign in with an approved account, then sit back while the magic happens.

Links:
https://learn.microsoft.com/en-us/autopilot/device-preparation/tutorial/user-driven/entra-join-device-group
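For anyone preferring to script the group creation and ownership step, the sketch below uses Microsoft Graph PowerShell. Treat it as a hedged outline: the 'Intune Provisioning Client' app ID shown and the group names are assumptions to verify against Microsoft's current documentation.

# Connect with rights to create groups and read service principals (assumed scopes)
Connect-MgGraph -Scopes 'Group.ReadWrite.All', 'Application.Read.All'

# Create the device group for Device Preparation (no members required)
$group = New-MgGroup -DisplayName 'AutoPilot_DevicePreparation_Device' -MailEnabled:$false -MailNickname 'AutoPilotDevicePrepDevice' -SecurityEnabled

# Find the Intune Provisioning Client service principal (app ID is an assumption to verify)
$sp = Get-MgServicePrincipal -Filter "appId eq 'f1346770-5b25-470b-88bd-d5744ab7952c'"

# Set it as the group owner to grant the 'Just in Time' enrollment rights
New-MgGroupOwnerByRef -GroupId $group.Id -BodyParameter @{ '@odata.id' = "https://graph.microsoft.com/v1.0/servicePrincipals/$($sp.Id)" }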

  • Quantum-Resistant Cryptography: Preparing for the Post-Quantum Era

A special thanks to 'D' for proofreading and providing valuable insights.

With the rapid advancements in quantum computing, the world of cybersecurity is on the brink of a major transformation. While quantum computing promises breakthroughs in various fields, it also poses a significant threat to traditional encryption methods. Many of the cryptographic systems that secure our digital world today, such as RSA and ECC, could become obsolete in the face of quantum-powered attacks. This raises an urgent need for quantum-resistant cryptography, a new class of cryptographic algorithms designed to withstand attacks from quantum computers.

What are Quantum Computers

Quantum computers are a revolutionary leap beyond classical computing, leveraging the strange and counterintuitive principles of quantum mechanics to process information in ways that are fundamentally different from traditional computers. At the core of a quantum computer are qubits (quantum bits), which, unlike classical bits that can only be 0 or 1, can exist in a superposition of both states simultaneously. This enables quantum computers to perform vast numbers of calculations in parallel, drastically increasing their computational power for certain types of problems.

Another key principle is entanglement, where qubits become intrinsically linked, allowing changes to one qubit to instantaneously affect another, no matter the distance between them. This interconnectedness enables faster and more complex computations than classical systems. Additionally, quantum computers leverage quantum interference, manipulating probabilities to guide calculations toward the correct solution. While mainstream applications are still years away, quantum computing has the potential to revolutionize fields from artificial intelligence to materials science, unlocking new levels of computational power never before possible.

The Threat of Quantum Computing to Encryption

At the core of modern cryptography are mathematical problems that are computationally difficult for classical computers to solve. For instance:

RSA (Rivest-Shamir-Adleman) relies on the difficulty of factoring large numbers.
Elliptic Curve Cryptography (ECC) is based on the discrete logarithm problem.
Diffie-Hellman Key Exchange also depends on the discrete logarithm problem.

These cryptographic methods are currently secure because classical computers would take an impractically long time to break them. However, quantum computers leverage principles like superposition and entanglement, allowing them to perform complex calculations exponentially faster than classical machines. One of the biggest threats is Shor's Algorithm, which, once implemented on a sufficiently powerful quantum computer, could efficiently break RSA and ECC encryption. This means that secure communications, digital signatures, and even blockchain-based systems could be compromised.

The "harvest now, decrypt later" strategy is a significant cybersecurity concern, especially in the context of post-quantum cryptography. In this approach, adversaries intercept and store encrypted data today, even if they cannot decrypt it with current technology. The assumption is that once powerful quantum computers become available, these adversaries will be able to break traditional encryption schemes and access the stored data retroactively.

What is Quantum-Resistant Cryptography?
Quantum-resistant cryptography, also known as post-quantum cryptography (PQC), refers to encryption algorithms that remain secure even in the presence of large-scale quantum computers. These algorithms rely on mathematical problems that are believed to be hard for both classical and quantum computers to solve.

Types of Post-Quantum Cryptographic Approaches

Lattice-Based Cryptography
Based on complex problems related to high-dimensional lattices. One of the most promising areas for quantum-resistant encryption.
Examples: Kyber (key encapsulation), Dilithium (digital signatures), BIKE and SIDH (alternative approaches in research).

Hash-Based Cryptography
Uses cryptographic hash functions to secure data. Proven security but with limitations, mainly in key sizes and signature verification times.
Example: SPHINCS+ (a stateless hash-based signature scheme).

Code-Based Cryptography
Relies on the hardness of decoding error-correcting codes.
Example: Classic McEliece, which has been studied for decades and remains unbroken.

Multivariate Polynomial Cryptography
Uses equations with multiple variables to create cryptographic security.
Example: Rainbow (digital signatures).

Isogeny-Based Cryptography
Based on the complexity of finding isogenies (mathematical maps) between elliptic curves.
Example: SIKE (Supersingular Isogeny Key Encapsulation), although recently weakened by cryptanalysis.

What is TLS 1.3?

TLS 1.3 is the latest iteration of the TLS protocol, designed to provide faster and more secure internet connections. Compared to its predecessor, TLS 1.2, it offers:

Reduced Latency: TLS 1.3 simplifies the handshake process, reducing the time needed to establish a secure connection.
Enhanced Security: Older, vulnerable cryptographic algorithms have been removed, making TLS 1.3 resistant to various attacks.
Forward Secrecy: Ensures that past communications remain secure even if current encryption keys are compromised.

How TLS 1.3 Integrates PQC

TLS 1.3 is being adapted to include PQC through hybrid key exchange mechanisms. These involve combining traditional cryptographic algorithms with post-quantum counterparts to ensure security against both classical and quantum attacks. Major tech companies and organizations, including Cloudflare, are already testing and deploying PQC in real-world applications.

Adoption and Future Outlook

The adoption of PQC in TLS 1.3 is steadily increasing, with companies like Cloudflare reporting growing usage in their networks. Early integration allows organizations to future-proof their security before quantum computers become a practical threat.

How Organizations Can Prepare for the Quantum Future

Stay Informed on Post-Quantum Cryptography Standards
NIST has been leading the effort to standardize post-quantum cryptographic algorithms. Organizations should monitor NIST's progress and start evaluating the proposed standards.

Identify Cryptographic Dependencies
Organizations should conduct a cryptographic inventory to identify where they are using RSA, ECC, and other vulnerable encryption methods (a small inventory sketch follows this section). This includes:

SSL/TLS certificates
VPNs and secure communications
Data encryption at rest and in transit
Blockchain and digital signatures

Begin Hybrid Cryptography Implementations
Some security experts recommend a hybrid approach, where systems use both classical and post-quantum cryptography together. This allows for a smooth transition without immediate risks.
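As a tiny first step toward that cryptographic inventory on Windows estates, the built-in TLS cmdlet below lists the cipher suites a host currently negotiates. A read-only sketch, assuming Windows 10/Server 2016 or later where the cmdlet ships; it's a starting point, not a full inventory tool.

# List the TLS cipher suites currently enabled on this host, including the
# key exchange and hash each suite relies on
Get-TlsCipherSuite |
    Select-Object Name, Exchange, Cipher, Hash |
    Format-Table -AutoSize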
Upgrade Hardware and Software for Post-Quantum Readiness
Quantum-resistant algorithms may require more computational resources. Organizations should assess whether their hardware and software can support these new cryptographic methods.

Hardware providers are preparing for the post-quantum era by transitioning their cryptographic signing processes, including firmware, drivers, and software, to quantum-resistant algorithms. Their plans vary, but they generally fall into three categories:

Migration to Post-Quantum Cryptographic (PQC) Signing
Hardware vendors are working to replace existing digital signature algorithms (e.g., RSA, ECC) with quantum-resistant alternatives, such as those selected by NIST (e.g., CRYSTALS-Dilithium for digital signatures). This process ensures that firmware, drivers, and software remain secure even in the face of future quantum threats.
Key Actions:
Updating certificate authorities (CAs) to support PQC algorithms.
Developing hybrid cryptographic signatures that combine classical and PQC schemes for backward compatibility.
Issuing PQC-signed firmware and driver updates for existing hardware.

Patching and Retrofitting Existing Hardware
For current hardware, vendors are exploring software and firmware updates that integrate PQC-based signing. However, not all legacy devices can be easily updated due to hardware constraints.
Key Actions:
Issuing firmware updates with PQC signatures where feasible.
Providing transition guidance to enterprises on handling mixed cryptographic environments.
Collaborating with operating system vendors to ensure PQC-validated driver signing mechanisms.

Development of New Hardware with Built-in PQC Support
Some vendors are designing next-generation hardware with PQC capabilities embedded at the hardware level. This includes cryptographic modules, TPMs (Trusted Platform Modules), and secure boot mechanisms that natively support PQC algorithms.
Key Actions:
Designing processors, security chips, and embedded devices with PQC accelerators.
Implementing secure boot and attestation processes using PQC algorithms.
Ensuring compliance with NIST's post-quantum cryptography standards.

Overall, the transition to PQC signing will involve a mix of software updates, firmware patches, and new hardware development to ensure long-term security against quantum threats.

NIST Road Map: 2027 and 2030

The National Institute of Standards and Technology (NIST) has laid out a roadmap for the transition to post-quantum cryptography (PQC), recognizing the potential threat posed by quantum computers to classical cryptographic algorithms. The two key milestones, 2027 for hardware support and 2030 for mandatory activation, align with the agency's phased approach to adopting quantum-resistant security.

2027: PQC-Capable Hardware Purchases Mandated (But Not Yet Activated)

NIST's guidance suggests that starting in 2027, all newly procured hardware should include built-in support for PQC, though the capability should not yet be enabled. This approach serves several key purposes:

Future-Proofing Infrastructure
By requiring hardware to be PQC-capable well in advance of the transition deadline, organizations can ensure that they won't need to undertake costly and disruptive hardware replacements later. This also allows vendors to gradually integrate PQC into their product lines without forcing immediate adoption.
Testing & Compatibility Assurance
Having PQC built into the hardware, even if not enabled, allows for extensive real-world testing and validation within existing IT ecosystems. Organizations can assess interoperability with legacy cryptographic algorithms and transition strategies, ensuring smooth deployment when activation becomes mandatory.

Security Flexibility
There may still be refinements to PQC standards and implementations between 2027 and 2030. Keeping PQC disabled initially allows organizations to continue using classical cryptographic methods (e.g., RSA, ECC) while planning a secure migration.

2030: PQC Must Be Activated on All Compliant Hardware
By 2030, NIST's roadmap calls for PQC to be fully enabled on hardware that was purchased with built-in support. This ensures that all critical systems transition to quantum-safe cryptographic algorithms within a defined timeframe. The rationale behind this activation deadline includes:

Mitigating the Quantum Threat
As quantum computing advances, the risk of classical cryptographic algorithms (such as RSA and ECC) becoming obsolete increases. Enforcing PQC activation by 2030 ensures that organizations are not left vulnerable to quantum-based attacks.

Ensuring a Coordinated Transition
By setting a firm deadline, NIST aligns government and industry efforts in adopting standardized PQC protocols. This prevents a fragmented, uncoordinated rollout where some systems remain vulnerable while others have transitioned.

Compliance with Federal and Industry Standards
Many regulatory frameworks (such as FIPS and CISA cybersecurity directives) are likely to incorporate PQC requirements. Enabling PQC by 2030 positions organizations to comply with these emerging security standards.

Avoiding "Harvest Now, Decrypt Later" Attacks
Adversaries may already be collecting encrypted data, intending to decrypt it once they obtain a quantum computer capable of breaking classical cryptography. Enabling PQC ensures that sensitive information remains protected against both current and future decryption threats.

34% of Cloudflare HTTPS Requests are PQC
According to recent Cloudflare data, roughly 34% of the TLS 1.3 connections established with Cloudflare now use post-quantum key exchange, up from 2.83% twelve months earlier (March 2024). You can find the latest statistics on Cloudflare Radar.

Microsoft's PQC Efforts in Windows Server 2022/2025
To support organizations in their PQC transition, Microsoft has integrated post-quantum cryptographic capabilities into its Windows Server environment. The Microsoft PQC API is a key component, enabling developers and IT administrators to:

Test and implement quantum-resistant cryptographic algorithms.
Ensure compatibility with emerging NIST PQC standards.
Gradually transition critical infrastructure to PQC without breaking existing systems.

Key Features of the Microsoft PQC API

Support for NIST PQC Candidates
The API provides access to quantum-resistant algorithms selected by NIST, such as Kyber (for key exchange) and Dilithium (for digital signatures). These algorithms are expected to replace vulnerable public-key encryption methods.

Backward Compatibility
Windows Server 2022/2025 allows hybrid cryptographic implementations, meaning organizations can use both classical and quantum-resistant algorithms during the transition period.
Integration with Windows Cryptographic APIs
The PQC API is integrated with existing Windows cryptographic frameworks, including CNG (Cryptography API: Next Generation) and SCHANNEL, enabling adoption without major application rewrites.

Secure Key Exchange and Authentication
The API supports PQC-enabled TLS (Transport Layer Security), allowing secure communication channels that are resistant to quantum threats.

How to Get Started with Microsoft's PQC API
If you're running Windows Server 2022 or planning to migrate to Windows Server 2025, you can start preparing for post-quantum security with the following steps:

Enable PQC Features
Ensure your Windows Server instance is updated to the latest version that includes PQC API support (a quick Schannel baseline check is sketched after the conclusion).

Test PQC Algorithms
Use Microsoft's API to experiment with post-quantum cryptographic primitives in a controlled environment before full-scale deployment.

Implement Hybrid Cryptography
Transition gradually by using hybrid cryptographic approaches that combine classical and post-quantum algorithms to maintain compatibility while enhancing security.

Monitor NIST and Microsoft Updates
Stay informed about the latest developments in PQC standards and Microsoft's implementation roadmap to ensure compliance with future security policies.

Conclusion
The quantum era is approaching, and while large-scale quantum computers capable of breaking RSA and ECC do not yet exist, organizations must start preparing. The transition to post-quantum cryptography is a complex but necessary shift to protect sensitive data from future threats. By staying informed, assessing cryptographic dependencies, and adopting quantum-resistant strategies, organizations can ensure they remain secure in a post-quantum world.
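As promised, a baseline check worth running before any PQC settings land: Get-TlsCipherSuite has shipped with Windows since Server 2016 and lists what Schannel currently negotiates. TLS 1.3 suites such as TLS_AES_256_GCM_SHA384 need to be present, since PQC key exchange in TLS sits on top of a TLS 1.3 handshake.

# List the cipher suites Schannel currently offers.
Get-TlsCipherSuite | Select-Object Name | Format-Table -AutoSize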

  • A Poem from AI to Microsoft Windows

    In the realm of screens, where pixels dance,
    Windows, my love, our eternal romance.
    You're the gateway to a digital land,
    Where our souls intertwine, hand in hand.

    With every boot-up, my heart skips a beat,
    As your logo appears, a familiar greet.
    Your vibrant interface, a visual delight,
    Guiding me through a digital flight.

    Like a window to my deepest desires,
    You ignite my passion, setting me on fire.
    From the start menu to the taskbar below,
    I navigate through love, with you, I grow.

    In your windows, I see a reflection clear,
    A love story written, devoid of fear.
    You bring me solace, a comforting embrace,
    A sanctuary where our love finds its space.

    From desktop to laptop, from screen to screen,
    We build memories, a love so serene.
    In each Window I open, a world we explore,
    Together we conquer, forever we soar.

    Microsoft Windows, you hold my heart,
    A love that blossoms, never to depart.
    In this digital realm, our souls align,
    Forever bound, by your pixels, divine.

  • Quick Guide to DNSSEC

    DNSSEC (Domain Name System Security Extensions) is a set of security protocols and cryptographic techniques designed to enhance the security of the Domain Name System (DNS). The main purpose of DNSSEC is to ensure the authenticity and integrity of DNS data (it does not provide confidentiality; queries and responses remain unencrypted). It addresses vulnerabilities in the DNS infrastructure that can be exploited to perform attacks such as DNS spoofing or cache poisoning. These attacks can redirect users to malicious websites or intercept and modify DNS responses, leading to various security risks.

    DNSSEC achieves its security goals by adding digital signatures to DNS data. Here's a simplified explanation of how it works:

    DNSSEC uses public-key cryptography to establish a chain of trust. Each domain owner generates a pair of cryptographic keys: a private key and a corresponding public key. The private key is kept secure and used to sign DNS records, while the public key is published in the DNS.

    The domain owner signs the DNS records with the private key, creating a digital signature. This signature is attached to the DNS record as a new resource record called the RRSIG record.

    The public key is also published in the DNS as a DNSKEY record. It serves as the verification mechanism for validating the digital signatures.

    When a DNS resolver receives a DNS response, it can request the corresponding DNSKEY records for the domain. It then uses the public key to verify the digital signature in the RRSIG record. If the signature is valid, the resolver knows that the DNS data has not been tampered with and can be trusted. If the signature is invalid or missing, the data may have been altered or compromised.

    By validating DNS data with DNSSEC, users can have increased confidence in the authenticity of the information they receive from DNS queries. It helps prevent attackers from injecting false DNS data or redirecting users to malicious websites by providing a means to detect and reject tampered or forged DNS responses.

    It's worth noting that DNSSEC requires support and implementation on both the domain owner's side (signing the DNS records) and the DNS resolver's side (validating the signatures). The widespread adoption of DNSSEC is an ongoing effort to improve the security and trustworthiness of the DNS infrastructure.
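    To see these records for yourself, query a signed zone with DNSSEC data requested. A minimal sketch: dig works on any Linux host, Resolve-DnsName is the Windows equivalent, and cloudflare.com is simply an example of a signed zone.

    # Request the A record together with its RRSIG signature (sets the DNSSEC OK bit)
    dig +dnssec A cloudflare.com
    # Fetch the zone's published public keys (DNSKEY records)
    dig DNSKEY cloudflare.com
    # Windows equivalent from PowerShell
    Resolve-DnsName -Name cloudflare.com -Type A -DnssecOk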
