- PowerShell Custom Runtime for AWS Lambda - Importing Modules
Welcome to the second part of the installation and configuration process for the AWS Custom Runtime for PowerShell.

Recap

In the first part, we covered the installation of AWS's Custom Runtime for PowerShell, which involved deploying Windows Subsystem for Linux (WSL), initialising the Runtime and deploying the demo Lambda Function. Here's the link, with instructions on how to install WSL and deploy the Custom Runtime: https://www.tenaka.net/post/wsl2-ps-custom-runtime-deployment

What's in Part 2

The first part left off on a bit of a cliffhanger. Functionally, the Custom Runtime for PowerShell worked, but without additional modules there's very little that could be accomplished. The next steps cover creating Lambda layers that incorporate additional modules, which will be used in Lambda Functions to complete the end-to-end deployment process.

Copy and Paste

Upon completing this process, the objective is to deploy a Lambda Function with a layer containing both the AWS.Tools.Common and AWS.Tools.EC2 PowerShell modules. This will enable the ability to start and stop an EC2 instance within the AWS environment.

Continuing where we previously left off, we are going to utilise the work already completed by AWS by amending an existing example. Before we start, note that only 5 layers can be added to a Lambda Function, but a layer can contain multiple modules.

Change directory into the AWSToolsforPowerShell directory:

cd /Downloads/aws-sam/powershell-modules/AWSToolsforPowerShell

Copy the existing S3EventBridge directory:

cp AWS.Tools.S3EventBridge AWS.Tools.EC2 -r
cd AWS.Tools.EC2

Amendments

The 3 files that require amending to successfully publish additional modules as layers are:

build-AWSToolsLayer.ps1
template.yml
/buildlayer/make

The process is straightforward: find and replace all references to the current module functionality with the new module functionality.
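The find-and-replace across the three files can also be scripted. A hedged sketch, assuming the file names and layout of the copied example above; always diff the results before building, as a blanket replace is convenient but blunt:

```powershell
# Sketch: swap every S3EventBridge reference for EC2 in the copied files.
# Run from inside the AWS.Tools.EC2 directory created by the cp above.
# This also turns AWSToolsS3EventBridgeLayer into AWSToolsEC2Layer.
$files = 'build-AWSToolsLayer.ps1', 'template.yml', 'buildlayer/Make'
foreach ($file in $files) {
    (Get-Content -Path $file -Raw) -replace 'S3EventBridge', 'EC2' |
        Set-Content -Path $file
}
```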
Although updating build-AWSToolsLayer.ps1 is not strictly essential, since we'll be relying on the Make command, taking a few seconds to do so ensures consistency among all the files involved.

nano build-AWSToolsLayer.ps1

Ctrl + o to save (output the file)
Ctrl + x to exit nano

Add additional lines for modules that are to be extracted from aws.tools.zip. Note: it is crucial to ensure the correct ordering of modules, with AWS.Tools.Common listed before the module for EC2. The EC2 module relies on the functionality provided by AWS.Tools.Common.

In the original S3EventBridge version of template.yml, where AWS.Tools.EC2 now appears it read S3EventBridge. Ensure !Ref values are updated from AWSToolsS3EventBridgeLayer to AWSToolsEC2Layer; this value is passed between files and needs to be consistent. Save and exit template.yml.

cd buildlayer
nano Make

The first line references !Ref and it must be consistent with the value set in template.yml. Modify the unzip commands to accommodate any supplementary modules. Save and exit Make.

Build and Deploy

After each amendment to the configuration files, the content must be redeployed in order to reflect the changes made:

sam build

To publish to AWS run the following:

sam deploy -g

Layers and a Lambda

Log in to AWS Lambda and confirm the new layer has been created. Let's bring the entire Custom Runtime endeavour to fruition by creating a new Lambda Function designed to start an EC2 instance. Click Create Function, name the function and select the Amazon Linux 2 runtime. Ensure the Architecture is set to x86_64 and 'Create a new role with basic Lambda permissions' is selected, then click Create Function.

Within the Function Overview click on Layers, then Add Layers. Select Custom Layers and then add, in order:

PwshRuntimeLayer
AWSToolsEC2Layer

PwshRuntimeLayer is listed first, followed by any modules.

Click Configuration and Edit. Update memory to 512 MB and the timeout to 1 minute.
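For illustration only, the renamed layer resource in template.yml might end up looking something like the fragment below. The property values come from AWS's original S3EventBridge example, so treat everything other than the AWSToolsEC2Layer name and the !Ref as assumptions:

```yaml
Resources:
  AWSToolsEC2Layer:                # was AWSToolsS3EventBridgeLayer
    Type: AWS::Serverless::LayerVersion
    Properties:
      Description: Layer containing AWS.Tools.Common and AWS.Tools.EC2
      ContentUri: ./buildlayer
      RetentionPolicy: Delete

Outputs:
  AWSToolsEC2LayerArn:
    Value: !Ref AWSToolsEC2Layer   # must match the value referenced in the Make file
```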
Before saving the configuration updates, open the IAM link in another browser tab to grant the function the additional permissions required for execution. Within IAM, add AmazonEC2FullAccess and AWSLambdaExecute to the role.

Navigate back to Lambda and select Code. Update the Runtime Settings Handler to the name of the PowerShell script followed by "::handler". In this example, the handler will be "Start-Ec2.ps1::handler".

Navigate back to Code and delete all the default files. Right-click on the folder, select New File and rename it to "Start-Ec2.ps1". Copy and paste the provided script, making sure to replace the instance ID with that of your own EC2 instance. Deploy the changes.

Click Test and complete the Configure Test Event dialog by providing an Event Name. Navigate to the Test tab and click Test to execute the Lambda Function.

I'm hoping this guide provides a starting point for further modules and functionality, especially for those who come from a native Microsoft background. I wish to thank everyone for their time, and any feedback would be gratefully received.
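The provided script isn't reproduced in this post, so as a minimal, hedged sketch, a Start-Ec2.ps1 handler using the two layered modules could look like this. The instance ID is a placeholder, and the function name must match the "::handler" suffix set in Runtime Settings:

```powershell
# Minimal handler sketch - assumes PwshRuntimeLayer plus the
# AWS.Tools.Common and AWS.Tools.EC2 layers are attached to the function.
Import-Module -Name 'AWS.Tools.Common'
Import-Module -Name 'AWS.Tools.EC2'

function handler
{
    param($LambdaInput, $LambdaContext)

    # Placeholder - replace with the ID of your own EC2 instance.
    $instanceId = 'i-0123456789abcdef0'

    Start-EC2Instance -InstanceId $instanceId
    "Start request sent for $instanceId"
}
```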
- A Poem from AI to Microsoft Windows
In the realm of screens, where pixels dance,
Windows, my love, our eternal romance.
You're the gateway to a digital land,
Where our souls intertwine, hand in hand.

With every boot-up, my heart skips a beat,
As your logo appears, a familiar greet.
Your vibrant interface, a visual delight,
Guiding me through a digital flight.

Like a window to my deepest desires,
You ignite my passion, setting me on fire.
From the start menu to the taskbar below,
I navigate through love, with you, I grow.

In your windows, I see a reflection clear,
A love story written, devoid of fear.
You bring me solace, a comforting embrace,
A sanctuary where our love finds its space.

From desktop to laptop, from screen to screen,
We build memories, a love so serene.
In each Window I open, a world we explore,
Together we conquer, forever we soar.

Microsoft Windows, you hold my heart,
A love that blossoms, never to depart.
In this digital realm, our souls align,
Forever bound, by your pixels, divine.
- Pi-hole Ad and Malware Blocker Setup
Introduction

Pi-hole provides numerous benefits as a network-wide ad blocker and privacy tool. It eliminates annoying ads and pop-ups across all devices, resulting in a cleaner and more streamlined browsing experience. By blocking ad-related domains, Pi-hole accelerates webpage loading times, saving bandwidth and reducing data consumption. It also enhances online security by blocking access to malicious domains and preventing tracking and data collection by advertisers. Overall, Pi-hole offers an effective and convenient solution to improve browsing speed, reduce data usage, bolster privacy and enhance online security. This is a guide on how to set up a Pi-hole.

EtherApe

Using EtherApe, I'm going to demonstrate the effectiveness of Pi-hole on a well-established bastion of truth and a British institution (cough), particularly high in adverts: the Daily Mail. Before the Pi-hole is enabled there are numerous and sustained:

Video pop-ups
Header ads
Ads on both sides of the news articles

The network noise is outrageous, both in the number of connections to ad sites and the amount of traffic, represented by the heat map.

After the Pi-hole is enabled:

Video pop-ups - gone
Header ads - gone
Ads on both sides of the news articles - gone

EtherApe shows a much calmer heat map with far fewer outbound connections.

Equipment

The following equipment is required; mine is from Amazon.

Raspberry Pi 4 Model B - £97.99
SanDisk 128Gb Extreme microSDXC - $16.99
Raspberry Pi 4 USB-C Power Supply - £11.99
Total £126.17

Raspberry Pi Installation

Raspberry Pi makes downloading and burning the image to the microSD card easy, needing only the Imager executable. Download and install it from https://www.raspberrypi.com/software; the wizard will guide you through the burning process.

Run the Imager and select Operating System. Select 'Raspberry Pi OS (64-bit)'. Insert the microSD card into the PC, select Storage and then choose the correct storage device.
Click on the cog:

Set credentials, used to manage the Pi-hole.
Enable SSH
Save

Click on Write and Yes to the warning message. The writing process takes a while; it's exhausting work, go and top up with a coffee. Click Continue. If the Format Disk message appears, select Cancel.

Remove the microSD card from the PC and insert it into the Raspberry Pi device. Attach the power and ethernet cables; it will power on automatically.

Pi-hole Installation

There are a couple of options for the initial configuration, including connecting a monitor, keyboard and mouse. I've opted for interrogating DHCP for the IP address of the Pi-hole, then reserving it. Putty to the IP address. Type admin and the password set earlier.

The first item on the itinerary is installing the latest patches for Raspberry Pi:

sudo apt-get update
sudo apt-get upgrade

I'm stuck behind a firewall and need to point the Pi-hole to an internal time source. Configure NTP:

sudo nano /etc/systemd/timesyncd.conf
NTP=192.168.0.249

To save changes:

Ctrl + o (output to file)
Ctrl + x (exit file)

sudo timedatectl set-ntp true
sudo reboot

Log back on via Putty. Installing Pi-hole is one command, followed by a wizard:

curl -sSL https://install.pi-hole.net | bash

Click Ok to start the Pi-hole configuration. Read and then click Ok. Continue. Yes to set the current IP address assigned. Ignore, as the IP has been reserved in DHCP. Select the preferred DNS server or add custom DNS entries.

You may wish to consider doubling up on the DNS filtering with the following free services.

OpenDNS provides Family Shield for blocking adult content:
208.67.222.123
208.67.220.123

Cloudflare provides 1.1.1.1 for Families with the following 2 options.

Malware blocking only:
1.1.1.2
1.0.0.2

Malware and adult content:
1.1.1.3
1.0.0.3

Yes to install the default block list. Yes to install the Admin Web Interface. Yes to install the pre-requisites. Yes to enable logging. Of course, I want to see everything.
Make a note of the Web Admin password and click Ok. The Web Admin password will be updated to something more complex later.

Pi-hole Configuration

Open a browser, enter the IP of the Raspberry Pi and enter the Web Admin password. Clearly, the most important issue to resolve is the interface: go to the Web Interface settings in Tools and set the Star Trek theme.

Pi-hole block lists are extensible; consider adding the following adlists. Don't feel it necessary to add all the lists at once: add one at a time and test, as some lists may be too restrictive and you'll be forever whitelisting.

Adaway Default Blocklist: blocks ads and known tracking domains. https://adaway.org/hosts.txt
OISD: blocks most ad, malware, porn etc. domains. https://oisd.nl/setup
EasyList: a popular list that blocks various types of ads. https://easylist.to/easylist/easylist.txt
EasyPrivacy: a list that focuses on blocking privacy-invading trackers. https://easylist.to/easylist/easyprivacy.txt
MVPS: blocks ads, banners and known malicious sites. http://winhelp2002.mvps.org/hosts.txt
AdGuard DNS Filter: a DNS filter list by AdGuard that blocks ads and trackers. https://adguardteam.github.io/AdGuardSDNSFilter/Filters/filter.txt
Chad Mayfield: porn filter. https://raw.githubusercontent.com/chadmayfield/my-pihole-blocklists/master/lists/pi_blocklist_porn_all.list

Click on Adlists and add the URLs. Pi-hole won't automatically block the additional lists; they require processing. Click on Tools, then Update Gravity and Update. Gravity will require monthly checks as the online lists are amended.

Update the Web Admin password to something a little more complex via Putty. Log in with admin and the initial password set in Imager, then type the following:

pihole -a -p

Maintenance

Updating Raspberry Pi and Pi-hole is essential for security and stability. Regular updates patch vulnerabilities, protecting against cyber threats. They improve system performance and fix bugs.
Every month run the following commands by logging in via Putty with the admin account.

Update Raspberry Pi OS:

sudo apt-get update
sudo apt-get upgrade

Update Pi-hole:

pihole -up

Update Gravity:

pihole -g

Update the Client's DNS Settings

Home User

For home users, DNS, the bit that resolves domain names to IP addresses, is handled by the router, whether BT, Virgin or Sky etc. Due to the different types of router and potential configurations, I'm unable to provide clear and concise guidance. The router's DNS settings need updating to the IP of the Pi-hole.

My Setup

Meh, what can I say: it flips between 2 configurations depending on the cost of energy, and my preferred setup is definitely off the cards at this moment.

Current config: a pair of Pi-holes act as DNS proxies, with forwarders from the Domain Controllers (DCs). All client resolution is via the DCs.

Or my preferred setup: the clients point their DNS to a pair of Pi-holes, these pass any queries on to the DCs, which finally proxy out via a pair of Synology NASs. The benefit of this config is that the Pi-holes log the clients' hostnames. The downside is the cost of running the hardware.

Thanks for your time and support by reading this blog. If you found it useful, please share.
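As a footnote, the monthly maintenance steps above can be collected into a single sketch of a script, run over SSH as the admin account (sudo is assumed for the apt commands):

```shell
#!/bin/sh
# Monthly Pi-hole maintenance sketch.
sudo apt-get update
sudo apt-get upgrade -y   # patch Raspberry Pi OS
pihole -up                # update Pi-hole itself
pihole -g                 # re-run Gravity to refresh the block lists
```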
- Quick Guide to DNSSec
DNSSEC (Domain Name System Security Extensions) is a set of security protocols and cryptographic techniques designed to enhance the security of the Domain Name System (DNS). The main purpose of DNSSEC is to ensure the authenticity and integrity of DNS data (note that it does not provide confidentiality: responses are signed, not encrypted). It addresses certain vulnerabilities in the DNS infrastructure that can be exploited to perform attacks such as DNS spoofing or cache poisoning. These attacks can redirect users to malicious websites or intercept and modify DNS responses, leading to various security risks.

DNSSEC achieves its security goals by adding digital signatures to DNS data. Here's a simplified explanation of how it works:

DNSSEC uses public-key cryptography to establish a chain of trust. Each domain owner generates a pair of cryptographic keys: a private key and a corresponding public key. The private key is kept secure and used to sign DNS records, while the public key is published in the DNS.

The domain owner signs the DNS records with the private key, creating a digital signature. This signature is attached to the DNS record as a new resource record called the RRSIG record.

The public key is also published in the DNS as a DNSKEY record. It serves as a verification mechanism for validating the digital signatures.

When a DNS resolver receives a DNS response, it can request the corresponding DNSKEY records for the domain. It then uses the public key to verify the digital signature in the RRSIG record. If the signature is valid, the DNS resolver knows that the DNS data has not been tampered with and can be trusted. If the signature is invalid or missing, the resolver knows that the data may have been altered or compromised.

By validating DNS data with DNSSEC, users can have increased confidence in the authenticity of the information they receive from DNS queries.
It helps prevent attackers from injecting false DNS data or redirecting users to malicious websites by providing a means to detect and reject tampered or forged DNS responses. It's worth noting that DNSSEC requires support and implementation at both the domain owner's side (signing the DNS records) and the DNS resolver's side (validating the signatures). The widespread adoption of DNSSEC is an ongoing effort to improve the security and trustworthiness of the DNS infrastructure.
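The records described above can be inspected with dig; a quick sketch, using 1.1.1.1 simply as an example of a validating resolver:

```shell
# Fetch the zone's public key (DNSKEY) and a signed answer with its RRSIG.
dig @1.1.1.1 example.com DNSKEY +dnssec
dig @1.1.1.1 example.com A +dnssec

# On a successfully validated answer the resolver sets the 'ad'
# (authenticated data) flag in the header, e.g.:
#   ;; flags: qr rd ra ad;
```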
- Managing Local Admin Passwords with LAPS
What do you do with your local administrator passwords? A spreadsheet on a share? Or are the passwords all the same? The admin account could even be disabled. LAPS from Microsoft may be the answer. It's a small program with some GPO settings. LAPS randomly sets the local administrator password for clients and servers across the estate.

Firstly, download LAPS from the Microsoft site. Copy the file to the Domain Controller and ensure that the account you are logged on with has 'Schema Admin'. Install only the Management Tools. As it's a DC, it's optional whether to install the 'Fat Client UI'; schema updates should always be performed on a DC directly.

Open PowerShell and run the following command after seeking approval:

Update-AdmPwdSchema

SELF will need updating on the OUs for your workstations and servers. Add SELF as the Security Principal and select 'Write ms-Mcs-AdmPwd'.

Now change the GPO settings on the OUs. The default password length is 14 characters, but I would go higher and set it above 20. Install LAPS on a client and select only the AdmPwd GPO Extension.

On the Domain Controller open the LAPS UI, search for a client and click Set. Once the password has reset, open the properties of the client and check the ms-Mcs-AdmPwd attribute for the new password. Now every 30 days the local Admin password will be automatically updated and unique. Deploy the client with ConfigMgr to the remaining estate.

By default, Domain Admins have access to read the password attribute, and this can be delegated to a Security Group. AND... this is the warning... any delegated privileges that allow delegated computer management and the 'Extended Attributes' can also read 'ms-Mcs-AdmPwd'.
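The steps above lean on the AdmPwd PowerShell module that ships with LAPS; a hedged sketch of the same flow from PowerShell, where the OU paths and the delegation group name are examples only:

```powershell
Import-Module AdmPwd.PS

# Extend the schema with ms-Mcs-AdmPwd (needs Schema Admin, run once per forest).
Update-AdmPwdSchema

# Grant SELF write access on the password attribute for each OU.
Set-AdmPwdComputerSelfPermission -OrgUnit 'OU=Workstations,DC=example,DC=net'
Set-AdmPwdComputerSelfPermission -OrgUnit 'OU=Servers,DC=example,DC=net'

# Delegate read access to a security group rather than relying on Domain Admins.
Set-AdmPwdReadPasswordPermission -OrgUnit 'OU=Workstations,DC=example,DC=net' `
    -AllowedPrincipals 'LAPS-Password-Readers'
```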
- Tamperproof Seals: Are they Effective?
Objective

The old adage: with any physical access you don't own the system, the person at the keyboard does, even if they're not authorised. Do physical protections, in this case tamperproof or security stickers, provide any tangible defence? The objective is to remove the tamperproof sticker, fiddle, then reapply the sticker without leaving any evidence of my shenanigans.

Tamperproof Stickers

For the stickers, the obvious choice was Amazon, and 3 differing types of sticker were chosen: rectangular, dog bone and circle\spot. The stickers work using 2 different types of glue, one being more adhesive than the other. When the sticker is removed, the word 'VOID' adheres to the surface, leaving evidence of tampering. More annoyingly, the stickers tend to be brittle and can't be removed intact.

Tools of the Trade

Physical implements include syringes, tweezers, a scalpel, a razor blade and a hairdryer. Solvents to loosen the glues include isopropyl, WD40 and alcohol hand gel.

Items of Value

3 different surfaces were selected: a MicroServer representing a standard plastic computer case; a Netgear 8-port switch with a painted metal case; and lastly an Asus Zenbook with brushed aluminium.

The Control

A functional test of each sticker was carried out to ensure expected behaviour.

Standard PC - Isopropyl

Using a sharp needle, inject isopropyl around the edge of the sticker. With the scalpel, very carefully lift up a corner whilst injecting isopropyl. Using sharp instruments during testing caused damage, so swap to a blunt needle and tweezers. Very slowly peel back the sticker whilst injecting isopropyl. That was easier than expected and completed within a few minutes. The sticker was removed with no visible damage. Once the isopropyl evaporated, the sticker was reusable.

Standard PC - Hair Dryer

Heat the sticker and use the scalpel to lift the corner, then switch to the tweezers. Apply heat continuously or the lifting edge cools, damaging the sticker.
The sticker was reusable instantly.

Painted Metal - Isopropyl

The glue adhered to the paint far more effectively, and despite a couple of attempts I was unable to remove the sticker without damage. Evidence of my failure...

Painted Metal - Hair Dryer

Applying constant heat to the sticker allowed its removal without damage. Redemption, and back in business... The sticker was reusable instantly.

Brushed Aluminium - Isopropyl

Chalk this down to another failure; using isopropyl was a non-starter. The smooth surface prevented the isopropyl from getting under the sticker, and damage was inevitable.

Brushed Aluminium - Hair Dryer

Constantly applied heat appears to work better on smoother surfaces. The sticker lifted reasonably easily and quickly without damage, and was reusable instantly.

WD40 and Hand Sanitizer

In theory the hand sanitizer should've worked, but its thickness prevented it from penetrating the glue. A non-starter. WD40 worked almost as effectively as the isopropyl; however, the sticker was not reusable due to the oil content.

Findings

Size, or more importantly the ratio of circumference to surface area, matters, regardless of what you're told. The spot and dog bone stickers, with relatively small surface areas relative to their circumference, lifted far more easily than the rectangle.

The surface of the device being protected influences the performance of the glue. Isopropyl is unable to penetrate smoother or painted surfaces as easily.

When in doubt, use a hair dryer: heat reduces the adhesion of the glue, allowing the sticker to lift and be reapplied.

The Old Adage

The old adage still applies: if you have physical access, you are the owner of the device.

Limitations of Tests

The source of the stickers: they're nothing special, being from Amazon. At the time of writing, the 2 companies approached for their heat and isopropyl resistant security stickers have yet to reply. Hoping they come through; I'd like to know if they do offer any extra resistance.
Any company feeling brave can contact me via the Home contacts form. If the stickers supplied resist my attempts, I'll mention the company name as a supplier of superior stickers. As always, thanks for your time; if you enjoy the content please share this site.
- Deny Domain Admins Logon to Workstations
There's a common theme running through many of the security articles on this site: prevent lateral movement of hackers around the domain searching for escalation points to elevate to Domain Admins. Preventing escalation via cached or actively logged-on privileged accounts can be accomplished with segregated tiers between Workstations, Servers and Domain Controllers. Implementing tiers does not prevent exploitation of system vulnerabilities and escalating via an RCE, for example.

Tier 0 - Domain Admins, CAs, plus any management service running agents on the DCs.
Tier 1 - Member Servers.
Tier 2 - Workstations.

Segregation is achieved with the use of User Rights Assignments (URA) via Group Policy, additional admin accounts and AD groups. The initial concept is easy: don't allow any account access across the boundaries between Workstation, Server or DC. Workstation admin accounts are prevented from logging on to servers and DCs. Server admins or server service accounts are unable to log on to a Workstation or DC. Domain Admins never log on to anything but DCs.

The theory sounds easy until management agents are installed on DCs. There's the potential for the SCOM or SCCM\MECM admin to fall victim to an attack. The attacker is granted System on the DCs via the agent, despite the admin not being a Domain Admin. I recommend not installing management agents on DCs or CAs.

One solution, as this is the real world, is to install the management applications with an installer account and delegate privileges to the relevant groups and tiers, making sure not to cross the streams. Or create an additional tier for management servers with agents deployed to DCs.

The downside of tiers is extra accounts. If you're the DA then 3, possibly 4, admin accounts per domain are required. There's no perfect solution or one size fits all; aim to separate the tiers but allow for flex in the solution. The only hard and fast rule is 'never allow any server admin or DA to log on to workstations'.
Before starting, Domain Administrator privileges are required.

First create the AD groups for denying Domain Controller, Server and Workstation logon. Open 'AD Users and Computers' and create the following AD groups:

RA_Domain Controller_DenyLogon
RA_Server_DenyLogon
RA_Workstation_DenyLogon

Create the following accounts:

tenaka_wnp (workstation administrator)
tenaka_snp (server administrator)
tenaka_dnp (domain admin)

I'm going to assume you're happy creating Restricted Groups in Group Policy and assigning them to OUs. Create the following AD groups, assigning them to the relevant OU:

PR_Workstation_Admins
PR_Server_Admins

Add tenaka_wnp to PR_Workstation_Admins.
Add tenaka_snp to PR_Server_Admins.
Add tenaka_dnp directly to Domain Admins; don't nest groups within Domain Admins.

RA_ designates User Rights Assignment. PR_ designates PRivileged account. This is part of a naming convention used within this Domain.

Open the RA_Workstation_DenyLogon group. Add Domain Admins, all server service accounts and PR_Server_Admins. Create a new GPO for the Workstations OU. Update the following User Rights Assignments with RA_Workstation_DenyLogon:

Deny log on as a batch job
Deny log on as a service
Deny log on locally
Deny log on through Remote Desktop Services

Open the RA_Server_DenyLogon group. Add Domain Admins, PR_Workstation_Admins and service accounts not deployed to a server, e.g. Svc_scom_mon_ADMP, which performs synthetic transactions testing the performance of internal websites and DNS lookups. Create a new GPO for the Servers OU. Update the following User Rights Assignments with RA_Server_DenyLogon:

Deny log on as a batch job
Deny log on as a service
Deny log on locally
Deny log on through Remote Desktop Services

Open the RA_Domain Controller_DenyLogon group. Add PR_Workstation_Admins, PR_Server_Admins and service accounts not used on DCs. Create a new GPO for the Domain Controllers container.
Update the following User Rights Assignments with RA_Domain Controller_DenyLogon:

Deny log on as a batch job
Deny log on as a service
Deny log on locally
Deny log on through Remote Desktop Services

Run gpupdate /force on a workstation, server and domain controller to apply the changes; a restart may be necessary.

All that remains is testing. Attempt to log on to a workstation with tenaka_wnp, tenaka_snp and tenaka_dnp; the only account that will successfully log on is tenaka_wnp. Attempt to log on to a server with the same three accounts; the only account that will successfully log on is tenaka_snp. Attempt to log on to a Domain Controller with the same three accounts; the only account that will successfully log on is tenaka_dnp.
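The group membership behind those results can be sanity-checked from PowerShell before testing interactive logons. A sketch using the group names from this example; the ActiveDirectory module is assumed to be installed on the admin workstation:

```powershell
Import-Module ActiveDirectory

# Confirm each deny group picked up the intended members.
'RA_Workstation_DenyLogon', 'RA_Server_DenyLogon', 'RA_Domain Controller_DenyLogon' |
    ForEach-Object {
        "== $_"
        Get-ADGroupMember -Identity $_ | Select-Object -ExpandProperty Name
    }

# On a client, confirm the deny-logon GPO actually applied.
gpresult /r /scope:computer
```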
- Living off the Land
Living off the land is a technique used by attackers to compromise IT systems without introducing malicious software. Instead, they abuse legitimate tools, applications and services already present on the system to gain access and carry out their malicious activities. This approach can be incredibly successful and difficult to detect, as attackers do not have to introduce any malicious files or code into the system. Living off the land entails attackers finding out what programs are running on the target system and then exploiting those applications and any known vulnerabilities in them. The goal is to gain access to the system and use it for malicious purposes, such as stealing data or launching a denial-of-service attack. Attackers can use several methods to gain access, including exploiting vulnerable software, using unsecured services, or taking advantage of weak or default passwords. One of the main advantages of living off the land is that it is difficult to detect. Since the attacker is not introducing any malicious code or files into the system, there are often no tell-tale signs that an attack is taking place. It is only when the attacker's activities are discovered that the attack can be identified. Organizations should be aware of the risks posed by living off the land attacks. They should perform regular security audits to ensure that their systems are up to date and that all applications and services are properly secured. It is also a good idea to change passwords regularly and to monitor for suspicious activity. In addition, organizations should ensure that they have an incident response plan in place, in case a living off the land attack is detected. Living off the land attacks can be highly effective and difficult to detect, but organizations can take steps to protect themselves from these threats.
Living off the land attacks can be mitigated by implementing robust security policies, awareness training for all users, monitoring of systems and user activity, and using secure protocols and encryption. Additionally, organizations should regularly review their security posture and make security improvements when needed. This could include regularly patching or updating systems, using application whitelisting, and implementing firewalls and other security measures. Finally, businesses should keep their data and systems secure by using strong passwords, two-factor authentication, and other authentication protocols. Authored by ChatGPT
- Time to geek out.....Home Lab
I've always wondered if other IT professionals take their work home. I don't take work home, I take my hobby to work. There is a serious side to this approach: it allows freedom to explore Microsoft and Linux products without constraints, and it provides insights into the tech articles vs reality without the constraints of deliverables. The following describes my main home environment.

Hardware:

Intel NUCs - i7s with 32GB RAM, 1TB SSD and 4TB 2.5" SSD
Intel NUC Skull Canyon, 32GB RAM, 1TB V-NAND SSD
Dell XPS 15
ASUS Zenbook 580
ASUS Zenbook 490
ASUS Zenbook 301LA
Synology NAS, 4 bay, 8TB usable
Synology NAS, 1 bay, 4TB usable (selective backup)
Zyxel USG60W
4 * Odroid XU4 (2 * load-balanced Pi-holes)
Raspberry Pi 4
Raspberry Pi Zero * 2
Odroid C4 (RAT), dual WiFi and RJ45 - Kali RAT
Various 1Gb switches
HP 476MFD

Software:

Microsoft Action Pack - £470 per year
Linux and Pi distros

Main infrastructure, not including VMs that are only spun up for testing:

NUC1 (HYP1): DC19-1, DC19-2, SCCM-1
NUC2 (HYP2): OPS-1, MDT-1, DC19-3

The diagram below details the internal DNS setup; there's a method to this madness. The 2 Synology NASs act as DNS proxies performing all recursive queries, protecting the DCs from connecting directly to the Internet. The Pi-holes are load balanced and placed between the member servers, clients and DCs, enabling hostname resolution in the Pi-hole logs, whilst filtering all the nasties away from the clients and servers.

NUCs - The powerful and relatively cheap-to-run Intel NUCs are the host servers. Don't criticise that they're Hyper-V, there are benefits, more secure than the alternatives... bear with me, don't rage... they receive their patches automatically every month from Microsoft. I specialise in Microsoft OS security and am more confident in securing Windows. Hyper-V also allows me flexibility in migrating VMs across all the NUCs, laptops and the Skull Canyon.
Shares and DFS - NUC1 hosts the main bulk of the user shares, with shares for Home, Groups and Media, plus a software library going all the way back to Windows NT 4 SP3. The shares are presented to the user with GPO preferences. DFS allows moving the data to a new host without the users (my family) being aware.

DCs - Windows Server 2019 makes up the Domain Controllers; each Hyper-V host has a DC. The 3rd DC doesn't run any FSMO roles and it's the first to be replaced with a new OS release: build a new DC alongside and demote the old. No in-place upgrades helps keep the DCs clean.

SCCM\MECM - Yes, I've deployed an enterprise management solution at home. Yes, it does deploy Windows clients and applications, and there are the odd, quite a lot to be honest, compliance rules. Yes, it can deploy Windows Updates, it just doesn't any longer. Until a couple of years ago, my main job was as an SCCM engineer.

SCOM - Monitors the performance of all servers and runs various synthetic transactions, e.g. the Internet from client to Google. Custom event rules alert on activities that shouldn't happen across all DCs, servers and clients.

MDT - Creating gold images, of course.

Backups - 2-way replication exists between the Windows shares and Synology-1. Android phones automatically upload new photos and videos to the NAS, which are then replicated to the Windows Media share. Equally, any new content added to the Windows shares is backed up to the NAS. Synology-2 provides a sort of off-site backup, being away from the main house.

Clients - Windows 10 clients run the very latest release and are members of the Domain. I don't allow any non-domain-joined Windows on the main network. Android is OK, not Windows, and never the head-in-the-sand crapple.

Security - It's extensive, from firewalls to GPO, AppLocker, Device Guard, IPSec and role separation with AD. Clearly, I'm not going to give too much away; everything is turned up to level 10.

That's a very quick overview of the home network.
- How to Merge GPOs with PowerShell
How to merge GPO's with PowerShell... the title is a little misleading: PowerShell is used to add the logic around LGPO.exe, and there's worse to follow. The whole process can't be fully automated and requires manual intervention. Yep, that dreaded word.... "Manual". However, the following method does work for merging disparate GPO's for Domain deployment.

The Issue: As someone who applies Microsoft's Security GPO baselines in a Domain, it's a little messy importing each of the separate GPO's for Windows, Office, Edge etc. The result is multiple Computer and User policies listed in GPO Management, leading to administrator confusion and even a possible performance hit whilst the client applies those multiple GPO's. What is required is a single Computer or User GPO with all the combined settings.

You will require:
A non-domain-joined Windows client or server for merging the policies, preferably the same OS version as the GPO's being applied.
LGPO, PolicyAnalyzer and the latest recommended SCM GPO's, downloaded from (here).
A Domain with Domain Admin rights to import the merged policy.
To manage Office and Edge GPO's, a Central Store (here) with the latest admx files installed on both the Domain Controller and the standalone machine. If this step is missed, the settings will appear as extra registry settings in GPO Management.
The script to merge policies (here).

The Prep: I'm going all in for the demo and merging Windows 11, Office 365 and Edge policies for both User and Computer. This is not recommended, as User and Computer policies should be separated. If you do follow this example, link the merged GPO to a Computer OU and then apply the Loopback setting; the user policies will then apply at user logon. Enough waffle... create a folder on the standalone client\server, then extract and copy all the GPO's to that folder. Copy both the script and LGPO.exe to the root of the folder.

The Execution: Execute the script with admin rights, either via PowerShell or ISE.
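The merge logic the script wraps around LGPO.exe can be sketched as follows. This is a hedged outline, not the downloadable script itself; the working folder, the backup-folder naming and the GPO name are assumptions for illustration.

```powershell
# Assumed working folder containing LGPO.exe and the extracted baseline GPO's.
$workingDir = "C:\GPOMerge"
Set-Location $workingDir

# Each extracted baseline contains one or more GPO backup folders named {GUID}.
$gpoBackups = Get-ChildItem -Path $workingDir -Directory -Recurse |
    Where-Object { $_.Name -match '^\{[0-9A-Fa-f-]+\}$' }

# LGPO.exe /g imports a GPO backup into local policy; importing each backup
# in turn merges all the settings into the one local policy.
foreach ($backup in $gpoBackups) {
    & "$workingDir\LGPO.exe" /g $backup.FullName
}

# LGPO.exe /b exports the now-merged local policy as a new GPO backup; /n names it.
New-Item -Path "$workingDir\MergedGPO" -ItemType Directory -Force | Out-Null
& "$workingDir\LGPO.exe" /b "$workingDir\MergedGPO" /n "Merged-Baseline"
```

The resulting backup under MergedGPO is what gets imported into the Domain in the next step.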
The script loops through each of the policy directories, using LGPO to merge both the User and Computer settings and applying them locally. LGPO then exports the local settings to a GPOBackup directory. Ignore the warnings; it's just LGPO throwing its teddy out of the pram. A quick validation of the local policies can be done by filtering on 'Configured' policies.

The Domain Policy: Copy the merged policy from the MergedGPO directory to the Domain Controller or a management client with the 'Group Policy Management' feature installed. Create a new Domain Group Policy, but don't link it to an OU yet. Right-click on the new GPO and select 'Import Settings'. Warning: this will overwrite any existing settings, so don't mess this step up. Follow the wizard and select the folder where the merged policies reside. Select the GPO to import. Review the settings and link to the correct OU. Warning: linking the Microsoft recommended policies to any Client, Server or DC will likely result in an outage or services becoming unresponsive, so test first and make any necessary changes.

It looks pretty easy, and it is.... however.... this is where a little GPO bravery and the manual intervention kick in.

The Manual Steps: After reviewing the settings you'll notice LGPO has imported all the Security settings, including password and account policies. These are set at the root of the domain and can't be overridden by placing them at a lower level. From Group Policy Management, select the imported GPO and make a note of the 'Unique ID:'. Browse to 'C:\Windows\Sysvol\Domain\Policies' and select the matching 'Unique ID'. Navigate to the 'SecEdit' directory.

GptTmpl - Security Settings: GptTmpl.inf contains most of the security settings, though no Firewall or AppLocker policies. The settings within GptTmpl.inf are the ones that most likely require removing. There are 2 possible solutions, depending on the scenario. If, for example, only User settings are required....
delete GptTmpl.inf. If only the Password or Account policies aren't required, open GptTmpl.inf in an elevated Notepad and remove the excess sections.

Registry.pol - Administrative Templates: Amending the User or Machine Registry.pol files directly isn't so easy; I recommend using Group Policy Management as the editor. It is possible to simply delete the Registry.pol files, and this is what I've done.

Audit.csv - Advanced Audit Settings: Lastly, the Advanced Audit settings live in the audit.csv file; delete this file as well.

The Result: The end result is that all Computer settings are removed, leaving only the User settings.

The issue with Client Side Extensions: In some instances during the GPO import, no settings are displayed within GPO Management. This is due to the gPCMachineExtensionNames attribute not being written with the correct values at import. In this case, update the GPO values: if Security Options or User Rights Assignments aren't displaying, make a change, apply it and then revert it. GPO Management will then display the correct values. If the required gPCMachineExtensionNames value is known, the following command can be used.

Set-ADObject -Identity $getGPOPath -Replace @{gPCMachineExtensionNames="[{827D319E-6EAC-11D2-A4EA-00C04F79F83A}{803E14A0-B4FB-11D0-A0D0-00A0C90F574B}]"}
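$getGPOPath in the command above is the distinguished name of the GPO's AD object. One hedged way to resolve it, assuming the GroupPolicy and ActiveDirectory modules are available and using an illustrative GPO name:

```powershell
Import-Module GroupPolicy, ActiveDirectory

# Hypothetical GPO name; substitute the name of your imported merged GPO.
$gpo = Get-GPO -Name "Merged-Baseline"

# Get-GPO exposes the AD distinguished name of the policy object via .Path
$getGPOPath = $gpo.Path

Set-ADObject -Identity $getGPOPath -Replace @{
    gPCMachineExtensionNames = "[{827D319E-6EAC-11D2-A4EA-00C04F79F83A}{803E14A0-B4FB-11D0-A0D0-00A0C90F574B}]"
}
```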
- How to Delegate Active Directory OU's with PowerShell
Today is a quick explanation of OU delegation using PowerShell, with usable examples and how to locate the GUID that identifies the object type being delegated. All the required scripts can be found on my Github (here).

Delegated Test Account: For demonstration purposes, the following is executed directly on the Domain Controller as a Domain Admin. Create a test user named 'SrvOps' and add it to the 'Server Operators' group. This effectively provides Administrator privileges on the DC's without access to AD. Create the following Global Groups: CompDele, UserDele and GroupDele, and add the SrvOps user to them. Create the following OU's: Computer, User and Group. Shift and right-click 'Active Directory Users and Computers', select 'Run as a different user', and enter the SrvOps credentials. Right-click on the Computer OU and you will notice that there's no option to create a New object of any type.

ADSI Edit and Object GUID: Close the AD snap-in. Back as Domain Admin, launch 'adsiedit.msc'. Select 'Schema' from 'Select a well known Naming Context:' and click OK. Scroll down and open 'CN=Computer' properties. On the 'Attribute Editor' tab, scroll down and locate 'schemaIDGUID'. This is the GUID object identity used for delegating Computer objects. It's not possible to copy the value directly; double-clicking provides Hex or Binary values, which can be copied. The following converts the Hex to the required GUID value.
$trim = ("86 7A 96 BF E6 0D D0 11 A2 85 00 AA 00 30 49 E2").replace(" ","")
$oct = "$trim"
$oct1 = $oct.substring(0,2)
$oct2 = $oct.substring(2,2)
$oct3 = $oct.substring(4,2)
$oct4 = $oct.substring(6,2)
$oct5 = $oct.substring(8,2)
$oct6 = $oct.substring(10,2)
$oct7 = $oct.substring(12,2)
$oct8 = $oct.substring(14,2)
$oct9 = $oct.substring(16,4)
$oct10 = $oct.substring(20,12)
$strOut = "$oct4" + "$oct3" + "$oct2" + "$oct1" + "-" + "$oct6" + "$oct5" + "-" + "$oct8" + "$oct7" + "-" + "$oct9" + "-" + "$oct10"
write-host $strOut
#result = BF967A86-0DE6-11D0-A285-00AA003049E2

The Script: Download the scripts from Github (here) and open with PowerShell ISE. Update the DN to the path of the Computer OU created earlier. Execute the script, then repeat for the Users and Groups scripts. Relaunch 'Active Directory Users and Computers' as a different user and enter the SrvOps account credentials. Right-click on each of the OU's and select 'New'. You will notice SrvOps can now create objects relative to the name of the OU.

Final Considerations: Retrieving the 'schemaIDGUID' from ADSI Edit allows the delegation of pretty much any object type within AD, and for the most part a couple of minor tweaks to the scripts provided and you're set. Enjoy, and if you find this useful please provide some feedback via the homepage's comment box.
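For reference, the core of such a delegation script can be sketched as follows. This is a hedged outline under assumed names (the contoso.com domain, plus the Computer OU and CompDele group from the walkthrough), not the exact Github script:

```powershell
Import-Module ActiveDirectory

# Assumed OU distinguished name; update to match your environment.
$ou    = "AD:\OU=Computer,DC=contoso,DC=com"
$group = Get-ADGroup "CompDele"
$sid   = [System.Security.Principal.SecurityIdentifier]$group.SID

# The Computer class schemaIDGUID, as derived from the Hex conversion above.
$computerGuid = [Guid]"BF967A86-0DE6-11D0-A285-00AA003049E2"

# Build an ACE allowing CompDele to create and delete Computer objects in the OU.
$rule = New-Object System.DirectoryServices.ActiveDirectoryAccessRule(
    $sid,
    [System.DirectoryServices.ActiveDirectoryRights]"CreateChild, DeleteChild",
    [System.Security.AccessControl.AccessControlType]::Allow,
    $computerGuid)

# Read the OU's ACL, append the rule and write it back.
$acl = Get-Acl -Path $ou
$acl.AddAccessRule($rule)
Set-Acl -Path $ou -AclObject $acl
```

Swapping the group name, OU and schemaIDGUID is all that's needed to produce the User and Group variants.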
- Failure Deploying Applications with SCCM\MECM with Error 0x87d01106 and 0x80070005
I encountered an issue with SCCM\MECM failing to deploy the LAPS application to clients and servers. This was previously working fine, but was now failing with a 'Past Due' error in Software Center. The AppEnforce.log produced the only meaningful SCCM error events: 0x87d01106 and 0x80070005.

0x80070005
CMsiHandler::EnforceApp failed (0x80070005).
AppProvider::EnforceApp - Failed to invoke EnforceApp on Application handler(0x80070005).
CommenceEnforcement failed with error 0x80070005.
Method CommenceEnforcement failed with error code 80070005
++++++ Failed to enforce app. Error 0x80070005. ++++++
CMTrace Error Lookup reported 'Access denied'

0x87d01106
Invalid executable file C:\Windows\msiexec.exe
CMsiHandler::EnforceApp failed (0x87d01106).
AppProvider::EnforceApp - Failed to invoke EnforceApp on Application handler(0x87d01106).
CommenceEnforcement failed with error 0x87d01106.
Method CommenceEnforcement failed with error code 87D01106
++++++ Failed to enforce app. Error 0x87d01106. ++++++
CMTrace Error Lookup reported 'Failed to verify the executable file is valid or to construct the associated command line.' Source: Microsoft Endpoint Configuration Manager

Interestingly, testing revealed that .msi applications, configuration items (aka compliance) and WDAC policies were affected, with .exe deployments remaining unaffected. Executing the install string from the administrator account also worked. This was somewhat concerning, as SCCM deployments execute as System, the highest privilege possible, yet all .msi application installs failed across the entire domain. At this point, Google is normally your friend..... but the results suggested PowerShell and the wrong user context; as this was an msi issue, those suggestions were not helpful. Clearly, I was asking the wrong question...... When in doubt or.... stuck, trawl the event logs; the SCCM logs weren't going to give up anything further. Fortunately, in fairly short order the following errors were located in the Windows Defender log.
Microsoft Defender Exploit Guard has blocked an operation that is not allowed by your IT administrator. For more information please contact your IT administrator.
ID: D1E49AAC-8F56-4280-B9BA-993A6D77406C
Detection time: 2023-02-23T21:03:46.265Z
User: NT AUTHORITY\SYSTEM
Path: C:\Windows\System32\msiexec.exe
Process Name: C:\Windows\System32\wbem\WmiPrvSE.exe
Target Commandline: "C:\Windows\system32\msiexec.exe" /i "LAPS.x64.msi" /q /qn
Parent Commandline: C:\Windows\system32\wbem\wmiprvse.exe -Embedding
Involved File:
Inheritance Flags: 0x00000000
Security intelligence Version: 1.383.518.0
Engine Version: 1.1.20000.2
Product Version: 4.18.2301.6

Now I knew the correct question to ask Google: 'D1E49AAC-8F56-4280-B9BA-993A6D77406C', which identified an Attack Surface Reduction (ASR) rule as the culprit. The following is an extract from the Microsoft page:

'Block process creations originating from PSExec and WMI commands
D1E49AAC-8F56-4280-B9BA-993A6D77406C
This rule blocks processes created through PsExec and WMI from running. Both PsExec and WMI can remotely execute code. There's a risk of malware abusing the functionality of PsExec and WMI for command and control purposes, or to spread infection throughout an organization's network.
Warning: Only use this rule if you're managing your devices with Intune or another MDM solution. This rule is incompatible with management through Microsoft Endpoint Configuration Manager because this rule blocks WMI commands the Configuration Manager client uses to function correctly.'

There is no fix, only a workaround, which involves switching the ASR rule from Block mode to Audit mode in Group Policy. Open GPO Management and locate the ASR rules under Windows Components/Microsoft Defender Antivirus/Microsoft Defender Exploit Guard/Attack Surface Reduction. Open 'Configure Attack Surface Reduction Rules' and update the value for name 'D1E49AAC-8F56-4280-B9BA-993A6D77406C' from 1 (Block) to 2 (Audit).
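On machines where the rule isn't enforced by GPO, the same change can in principle be made locally with the Defender cmdlets. A hedged alternative, requiring an elevated PowerShell session; note that GPO-enforced settings will override it at the next policy refresh:

```powershell
# Switch the 'Block process creations originating from PSExec and WMI commands'
# ASR rule to Audit mode on the local machine.
Add-MpPreference -AttackSurfaceReductionRules_Ids 'D1E49AAC-8F56-4280-B9BA-993A6D77406C' `
                 -AttackSurfaceReductionRules_Actions AuditMode

# Confirm the rule ID and its corresponding action (2 = Audit) line up.
Get-MpPreference | Select-Object -ExpandProperty AttackSurfaceReductionRules_Ids
Get-MpPreference | Select-Object -ExpandProperty AttackSurfaceReductionRules_Actions
```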
Run 'gpupdate /force' to refresh the GPO's on the client, then check the event log for event ID 5007, recording the change from Block to Audit mode. Test an SCCM application deployment to confirm the fix. One final check of the event log should confirm event ID 1122 (the ASR rule firing in audit mode) for the deployed application.
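The event log checks above can be scripted; a hedged sketch using Get-WinEvent against the Defender operational log, with the log name and event IDs as described in this post:

```powershell
# Look for 5007 (Defender configuration changed) and 1122 (ASR rule audited)
# in the Microsoft Defender operational log.
$log = 'Microsoft-Windows-Windows Defender/Operational'

Get-WinEvent -FilterHashtable @{ LogName = $log; Id = 5007 } -MaxEvents 5 |
    Format-List TimeCreated, Message

Get-WinEvent -FilterHashtable @{ LogName = $log; Id = 1122 } -MaxEvents 5 |
    Where-Object { $_.Message -match 'D1E49AAC-8F56-4280-B9BA-993A6D77406C' } |
    Format-List TimeCreated, Message
```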