
Notes | AD | AD Tools

Introduction

This post is about understanding how the main tools work, because achieving an attack is great, but being able to do it from scratch is better. If you know how the tools work, you know how the attacks work.

Nmap

Definition

Nmap is a powerful network scanning tool used to discover hosts, services, and vulnerabilities by performing port scanning, OS detection, and more.

Default Use Case

To check whether a machine is up, nmap performs host discovery. As root, the default probes are an ICMP echo request, a TCP SYN to port 443, a TCP ACK to port 80 and an ICMP timestamp request; if any of them gets an answer, the host is marked as up. Without root privileges, nmap falls back to TCP connect attempts to ports 80 (HTTP) and 443 (HTTPS).

By default, nmap scans the 1,000 most commonly used TCP ports on the given target (e.g. nmap $TARGET). If we read the documentation, we see that it divides ports into six states: open, closed, filtered, unfiltered, open|filtered, or closed|filtered.

| State | Description |
| --- | --- |
| open | An application is actively accepting TCP connections, UDP datagrams or SCTP associations on this port. Finding these is often the primary goal of port scanning. Security-minded people know that each open port is an avenue for attack. Attackers and pen-testers want to exploit the open ports, while administrators try to close or protect them with firewalls without thwarting legitimate users. Open ports are also interesting for non-security scans because they show services available for use on the network. |
| closed | A closed port is accessible (it receives and responds to Nmap probe packets), but there is no application listening on it. They can be helpful in showing that a host is up on an IP address (host discovery, or ping scanning), and as part of OS detection. Because closed ports are reachable, it may be worth scanning later in case some open up. Administrators may want to consider blocking such ports with a firewall. Then they would appear in the filtered state, discussed next. |
| filtered | Nmap cannot determine whether the port is open because packet filtering prevents its probes from reaching the port. The filtering could be from a dedicated firewall device, router rules, or host-based firewall software. These ports frustrate attackers because they provide so little information. Sometimes they respond with ICMP error messages such as type 3 code 13 (destination unreachable: communication administratively prohibited), but filters that simply drop probes without responding are far more common. This forces Nmap to retry several times just in case the probe was dropped due to network congestion rather than filtering. This slows down the scan dramatically. |
| unfiltered | The unfiltered state means that a port is accessible, but Nmap is unable to determine whether it is open or closed. Only the ACK scan, which is used to map firewall rulesets, classifies ports into this state. Scanning unfiltered ports with other scan types such as Window scan, SYN scan, or FIN scan, may help resolve whether the port is open. |
| open\|filtered | Nmap places ports in this state when it is unable to determine whether a port is open or filtered. This occurs for scan types in which open ports give no response. The lack of response could also mean that a packet filter dropped the probe or any response it elicited. So Nmap does not know for sure whether the port is open or being filtered. The UDP, IP protocol, FIN, NULL, and Xmas scans classify ports this way. |
| closed\|filtered | This state is used when Nmap is unable to determine whether a port is closed or filtered. It is only used for the IP ID idle scan. |

By default Nmap uses the TCP SYN scan (-sS flag), which requires root privileges. It sends a SYN packet and waits for an answer from the target. If it receives a SYN/ACK, the port is listening (open). If the response is a RST (reset), the port is marked as closed (non-listening). If no response is received, the port is marked as filtered.

The SYN scan never completes the three-way handshake: it does not send the final ACK after receiving the SYN/ACK.
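The decision logic above can be sketched as a tiny, hypothetical helper (not Nmap's actual code) that maps the reply to a SYN probe onto a port state:

```python
from typing import Optional

def classify_syn_response(reply: Optional[str]) -> str:
    """Toy model of how a SYN scan classifies a port from one probe."""
    if reply == "SYN/ACK":
        return "open"          # something is listening
    if reply == "RST":
        return "closed"        # host reachable, nothing listening
    return "filtered"          # silence: a filter probably dropped the probe

print(classify_syn_response("SYN/ACK"))  # open
```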

More options

-sU: This flag performs a UDP scan instead of a TCP one. Nmap sends protocol-specific payloads to ports associated with well-known protocols (DNS, SNMP, DHCP…) and empty packets to every other port, which speeds up UDP scans a little. UDP scans are slow because, without TCP's three-way handshake, we can't know whether the machine actually received our packet: probes may be lost, time out, or be dropped by protections like firewalls or IDS. A closed port often sends back an ICMP port unreachable error, but many hosts rate-limit these messages by default. For example, the Linux 2.4.20 kernel limits destination unreachable messages to one per second (in net/ipv4/icmp.c), which would make a 65,536-port scan last 18 hours. And as the man page says, "Nmap detects rate limiting and slows down accordingly to avoid flooding the network with useless packets that the target machine will drop".
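The closed vs open|filtered ambiguity can be illustrated with a single UDP probe in Python. This is a sketch assuming Linux, where an ICMP port unreachable error surfaces as ConnectionRefusedError on a connected UDP socket:

```python
import socket

def probe_udp(host: str, port: int, timeout: float = 1.0) -> str:
    """One UDP probe, classified the way a UDP scan would classify it."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.connect((host, port))
        try:
            s.send(b"")              # empty datagram, like Nmap's default probe
            s.recv(1024)             # wait for any answer
            return "open"            # the service replied
        except ConnectionRefusedError:
            return "closed"          # ICMP port unreachable came back
        except socket.timeout:
            return "open|filtered"   # silence: open service or dropped probe
```

Probing a port that is almost certainly closed on your own machine (e.g. `probe_udp("127.0.0.1", 9)`) usually returns "closed" quickly, while a firewalled host would time out into "open|filtered".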

You can speed up your UDP scans by running them against multiple hosts at the same time, scanning from behind the firewall, and using --host-timeout to skip slow hosts.

-sT: The TCP connect scan replaces the default SYN scan when the latter is not applicable. It is less stealthy because it performs the full three-way handshake (SYN -> SYN/ACK -> ACK) and then closes the connection (FIN -> FIN/ACK -> ACK), which may trigger alerts on the audited systems. Its main advantage is that it doesn't require root privileges. Another advantage is that, by completing the handshake and closing the connection cleanly, it is less brutal than the SYN scan, which never closes its connections.

-Pn: Sometimes nmap reports that the host appears to be down and suggests this option. -Pn is the "no ping" option: nmap skips host discovery entirely and, instead of pinging the host to check whether it is up, treats it as up and scans it anyway.

-sV: Version detection is based on the responses nmap receives during its scan. It has more than 6,500 patterns covering over 650 protocols. This information is stored in the nmap-service-probes database, which contains probes for querying various services and match expressions to recognize and parse responses.

-sC: You can run nmap with this flag to run the default set of scripts. This uses the Nmap Scripting Engine (NSE), and you can find information on the different script categories here. These scripts are written in the Lua programming language. Nmap is very flexible, allowing you to write and use your own scripts.

To provide your own script, either pass it with --script $FILENAME or drop it in /usr/share/nmap/scripts/. You can use the * wildcard to select all scripts that start with a word (e.g. ftp-*). You can also combine categories such as default, intrusive or safe with the boolean operators and and or (e.g. --script "(default or safe or intrusive) and not http-*")

-O: The OS detection flag gives us more information about the target's OS, such as its name (Windows, Linux, macOS…), its version and the device type (router, switch…). It is based on a database of known OS fingerprints (nmap-os-db). Nmap probes the host and, if the responses match a known fingerprint, the information is printed out.

Ffuf, gobuster…

Definition

All the tools like Ffuf, gobuster, dirsearch… are used to brute-force directories, files, domains and parameters on web servers to discover hidden content or vulnerabilities.

Here I am going to talk about vhost, subdomain, and page scanning.

Because web servers can host multiple websites, we can choose which website to serve for which domain: this is virtual hosting. The server decides which site answers your request, and you tell it which one you want by setting the Host: XXX HTTP header.

Because we want people to reach the vhosts on our server, we create DNS records for them so that each name resolves to an IP address.

In summary:

| Feature | Subdomains | Virtual Hosts (vhosts) |
| --- | --- | --- |
| Definition | Prefixes added to a domain in DNS (e.g., sub.domain.com) | Server configuration that serves multiple sites on one server |
| Purpose | Organize sections of a domain or services; also allows creating a DNS record for a specific IP address | Host multiple websites on the same server |
| DNS Relation | Configured in DNS | Configured on the web server (Apache, Nginx, etc.) |
| Example | blog.example.com, shop.example.com | Web server hosting both example.com and another.com on the same IP |

If you like pretty outputs with emojis and stuff, check out feroxbuster or dirsearch

To detect whether a subdomain exists, tools like ffuf or gobuster perform a DNS query (usually an A record lookup) to check if the subdomain resolves to an IP address. If an IP is found, the tool can contact it and start scanning it.
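The DNS step is easy to reproduce. Here is a sketch of the check these tools perform for every candidate name (the wordlist loop in the comment is hypothetical):

```python
import socket

def resolves(hostname: str) -> bool:
    """True if the name resolves to at least one address (A/AAAA record)."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# The brute-force loop is then just:
# for word in wordlist:
#     if resolves(f"{word}.{domain}"):
#         print("found:", word)
```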

For vhost detection, we send a plain HTTP request with the header Host: XXX, where XXX is the vhost we want to test. Depending on the server's response, we guess whether the vhost exists. For example, with ffuf, we filter out responses whose length matches the responses received for vhosts we know are not real.

In summary:

| Enumeration Type | Protocol Used | What is Sent | What is Expected |
| --- | --- | --- | --- |
| Subdomain Enumeration | DNS (UDP/53) | DNS queries | IP address for valid subdomains (A or CNAME records) |
| Vhost Enumeration | HTTP/HTTPS (TCP/80 or 443) | HTTP requests with different Host headers | Valid virtual host content or errors |
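Here is a self-contained sketch of the vhost idea: a toy local server playing the role of a web server with one real vhost ("intranet.target", a made-up name) plus a default page, and a fuzzing loop that spots the real vhost by comparing response lengths, the same filtering idea as ffuf's -fs:

```python
import http.server
import threading
import urllib.request

class VhostHandler(http.server.BaseHTTPRequestHandler):
    """Serves a distinct page only for the 'intranet.' vhost."""
    def do_GET(self):
        host = self.headers.get("Host", "")
        body = b"secret intranet portal" if host.startswith("intranet.") else b"default site"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), VhostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def probe(vhost: str) -> int:
    """One request with a forged Host header; return the body length."""
    req = urllib.request.Request(f"http://127.0.0.1:{port}/", headers={"Host": vhost})
    with urllib.request.urlopen(req) as resp:
        return len(resp.read())

baseline = probe("random-nonexistent.target")      # size of the default page
hits = [v for v in ("www.target", "intranet.target", "mail.target")
        if probe(v) != baseline]                   # keep only deviating sizes
print(hits)   # ['intranet.target']
server.shutdown()
```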

For page or directory scanning, these tools simply make an HTTP request (DNS resolution is done automatically to get the domain's IP) and, based on the response code (200, 301, 404…), tell you whether you have access, whether it is forbidden, whether the page does not exist, and so on. You can also filter by page size, number of words…

A web server can choose not to follow the HTTP status code standard and return a 200 even when the page doesn't exist. In that case we can filter on the size of the server's response, the number of words…

In summary:

| Status Code | Category | Description |
| --- | --- | --- |
| 100 | Informational | Continue: The server has received the request headers, and the client should proceed to send the request body. |
| 101 | Informational | Switching Protocols: The server is switching to a different protocol as requested by the client. |
| 200 | Success | OK: The request was successful, and the server returned the requested data. |
| 201 | Success | Created: The request was successful, and a new resource has been created. |
| 202 | Success | Accepted: The request has been accepted for processing, but the processing is not complete. |
| 204 | Success | No Content: The request was successful, but the server is not returning any content. |
| 301 | Redirection | Moved Permanently: The requested resource has been moved to a new permanent URL. |
| 302 | Redirection | Found (Temporary Redirect): The requested resource is temporarily at a different URL. |
| 304 | Redirection | Not Modified: The cached version of the requested resource can be used. |
| 400 | Client Error | Bad Request: The request was invalid or cannot be understood by the server. |
| 401 | Client Error | Unauthorized: Authentication is required to access the resource. |
| 403 | Client Error | Forbidden: The server understood the request, but it refuses to authorize it. |
| 404 | Client Error | Not Found: The server could not find the requested resource. |
| 405 | Client Error | Method Not Allowed: The HTTP method used is not allowed for the requested resource. |
| 408 | Client Error | Request Timeout: The server timed out waiting for the client to send the request. |
| 418 | Client Error | I'm a Teapot: An April Fools' joke code (RFC 2324), sometimes returned for requests the server does not wish to handle. |
| 429 | Client Error | Too Many Requests: The client has sent too many requests in a given period. |
| 500 | Server Error | Internal Server Error: The server encountered an error and could not complete the request. |
| 502 | Server Error | Bad Gateway: The server was acting as a gateway or proxy and received an invalid response from the upstream server. |
| 503 | Server Error | Service Unavailable: The server is temporarily unavailable, usually due to maintenance or overload. |
| 504 | Server Error | Gateway Timeout: The server was acting as a gateway and did not receive a timely response from the upstream server. |

Default Use Case

Ffuf

For the vhost scan you can fuzz like this (where 4242 is the size of the response for a bad vhost):

ffuf -w /path/to/vhost/wordlist -u https://target -H "Host: FUZZ.target" -fs 4242

For subdomains you can run the following command:

ffuf -w /path/to/dns/wordlist -u https://FUZZ.target

And for page or directory listing you can just run the following:

ffuf -w /path/to/dir/wordlist -u https://target/FUZZ

Gobuster

For the vhost scan you can run the following command:

gobuster vhost -w /path/to/vhost/wordlist -u https://target

For subdomains you can run the following command:

gobuster dns -w /path/to/dns/wordlist -d $DOMAIN

You may want to add the -i option to print the IP addresses of the discovered domains.

And for page or directory listing you can just run the following:

gobuster dir -w /path/to/dir/wordlist -u https://target

More options

Ffuf

-fc, -fl, -fs, -fw: You can filter responses by many criteria, such as status code (-fc), number of lines in the response (-fl), response size (-fs), number of words (-fw) and much more…

-recursion: This allows us to search recursively inside the folders that are found.

If you expect many nested folders, you can add -recursion-depth X where X is the maximum depth ffuf will check.

-e $A,$B,$C: This allows you to provide one or more extensions ($A,$B,$C looks like .html,.php,.txt)

-t 50: If you want to speed up your scanning, you can increase the number of threads.

Don't set the number of threads too high: it may crash the site, get you banned, or produce a lot of errors.

For more information about this tool, you can check codingo website.

Gobuster

-x $A,$B,$C: This allows you to provide one or more extensions ($A,$B,$C looks like html,php,txt)

Pay attention to the fact that ffuf expects the dot in the extension (-e .EXTENSION) while gobuster takes it without the dot (-x EXTENSION).

-t 50: If you want to speed up your scanning, you can increase the number of threads.

Responder

Definition

Responder focuses on capturing authentication credentials from protocols such as SMB, HTTP or LDAP by spoofing domain names or listening for incoming domain name resolution with protocols like LLMNR (Link-Local Multicast Name Resolution), NBT-NS (NetBIOS Name Service), and mDNS (Multicast DNS).

Responder comes in two modes:

  • Listening/passive mode (-A, --analyze flag):
    Responder only listens for name-resolution requests (LLMNR, NBT-NS, mDNS) and logs them; in this mode it does not answer, so no poisoning takes place

  • Active mode:
    Responder spoofs protocols like LLMNR, NBT-NS and mDNS, and sets up SMB, HTTP or WPAD servers to capture user authentication → the client sends authentication material such as an NTLM challenge-response over protocols like SMB, which we can then crack offline or use for relay

  • LLMNR, NBT-NS, mDNS Poisoning: Spoofs LLMNR, NBT-NS or mDNS to answer the victim and capture challenges (Net-NTLM hashes).
  • SMB and HTTP Server: Acts as a fake SMB or HTTP server to capture NTLM hashes.
  • WPAD Rogue Proxy Server: Captures Web Proxy Auto-Discovery requests and redirects them to capture Net-NTLM hashes.

Default Use Case

The default use case is responder -I $INTERFACE where $INTERFACE is the interface linked to your network.

By default, Responder will start poisoning on the name resolution protocols LLMNR, NBT-NS, MDNS and DNS:

[Tools_Resp_Poisoners.png]

And it opens several servers on your machine (SMB, HTTP, FTP…) for users to connect to:

[Tools_Resp_Servers.png]

Once you launch it, wait for someone to connect to you and try to authenticate. You can capture Net-NTLM hashes and try to crack them with hashcat or relay them with ntlmrelayx:

[Tools_Resp_Hash.png]

More options

You can append -wd to the initial command. The -w flag starts the WPAD rogue proxy server and -d answers DHCP broadcast requests.

Note that the -d option will inject a WPAD server in the DHCP response

ntlmrelayx

Definition

ntlmrelayx is a tool that relays NTLM authentication requests to different services, allowing an attacker to gain unauthorized access through relay attacks. The targets we look for are any servers on the domain that do not require signing (e.g. SMB signing not enforced).

To have better understanding on the NTLM authentication mechanism you can check here and for the relay attack, check there

Default Use Case

ntlmrelayx will relay received connections to the chosen servers, giving us authenticated connections. You can run the following command:

ntlmrelayx -tf targets.txt -of netntlm -smb2support -socks

  • -tf $TARGETS: file containing the list of servers you want to relay to.
  • -of $FILENAME: saves the captured Net-NTLM responses (based on the client hash) so you can try to crack them later.
  • -smb2support: adds support for SMB v2.
  • -socks: starts a SOCKS proxy to run commands through the received authenticated connections. This lets us execute commands as a specific user without knowing their password.

With those options enabled, we can see the incoming successful authentications:

[Tools_ntlmrelayx_cmd.png]

Those connections come from Responder, which forwards them to ntlmrelayx for relaying.

By running the socks command, we can see all relayed users and whether they are admin on the target server:

[Tools_ntlmrelayx_socks.png]

ntlmrelayx creates a session for each relayed connection (as seen above). Other tools using proxies, like proxychains, should check on local port 1080 (ntlmrelayx's default) whether a session is available for the given IP and port. If so, the tool connects through it, but some actions may require admin rights (AdminStatus set to TRUE).

Because we use the option socks, we can now run commands with proxychains like a secretsdump with the admin user:

[Tools_ntlmrelayx_secdump.png]

You can use impacket tools with the -no-pass option when going through a relayed connection. If that option is not available, just specify an empty password and it should work the same. Here, only SMB and LDAP were used as relay targets.

Here, we used ntlmrelayx with responder but you can use it with coercer to force the connection instead of waiting for it.

Tools like nxc (previously cme) don’t have the -no-pass option. Just specify an empty password (or whatever you want, it will be ignored in any case) and it should be OK:

[Tools_ntlmrelayx_nxc.png]

More options

-t $PROTOCOL://$SERVER: if you want to relay to other protocols, you can specify them directly on the command line with this option.

It is also possible to prepend it in the file when we use the -tf option. We will have lines like ldap://192.168.56.12 or smb://192.168.56.12.

Note that when we relayed to SMB we could create SOCKS sessions for a stable connection. This is not the case when relaying to LDAP: there you have to use the tool's options to specify what to do, like --add-computer $COMPUTER_NAME.

-l $LOOT_DIR dumps looted information, such as SAM or LDAP data, into $LOOT_DIR. By default, if a relayed connection allows an LDAP bind, ntlmrelayx will generate files from the LDAP data using ldapdomaindump (e.g. computers by OS, users with their descriptions, domain policies…)

lsassy

Definition

Lsassy is a post-exploitation tool designed to extract Windows credentials (LSASS dumps) remotely without touching disk, minimizing detection.

LSASS is the process holding the secrets of the most recently logged-on users (local or domain). It verifies users logging in, handles password changes and creates access tokens.

Enabling Credential Guard isolates the process to protect it. For additional protection you can check this post from Microsoft.

Lsassy recovers passwords remotely using either ProcDump (a Microsoft tool from the Sysinternals suite) or the Windows DLL comsvcs.dll, which can dump a process, but only if you are SYSTEM.

This tool was developed by pixis so you can check his blog on hackndo if you want more information about the tool.

Default Use Case

You can run lsassy when you have an admin user on a server:

lsassy -d $DOMAIN -u $USER -p $PASSWORD $SERVER_IP

or using nxc / cme:

nxc smb $SERVER_IP -d $DOMAIN -u $USER -p $PASSWORD -M lsassy

[Tools_Lsassy.png]

More options

-m allows you to specify the dumping method, like dumpertdll and many others. You'll need to give the path of the module (DLL or binary) you are using with -O dumpertdll_path=$PATH_OF_THE_DLL to evade AV. You can find the code to compile on the dumpert GitHub repository.

Secretsdump

Definition

Secretsdump is a tool to extract sensitive information like password hashes from remote systems without executing code on the target machine.

You will need admin rights to be able to run this script successfully

  • It dumps the SAM and LSA (SECURITY) hives from the Registry and saves the result in a temp directory.
  • It also dumps NTDS (only on a DC). It gets the list of domain users, their hashes and Kerberos keys via [MS-DRSR] DRSGetNCChanges() or via vssadmin.
  • After dumping everything, it cleans up its traces by removing any created files.

Default Use Case

You can run Secretsdump with only a user and a password (or hash):

secretsdump $DOMAIN/$USER:$PASSWORD@$SERVER_IP

[Tools_Secretsdump.png]

More options

-hashes: use this flag if you don't have the admin's password and want to authenticate with their hash instead.

testssl.sh

Definition

testssl.sh is a tool that checks SSL/TLS configurations on servers to detect vulnerabilities or misconfigurations related to encryption and security protocols.

  1. First, testssl.sh does a DNS lookup to find the (possibly multiple) IP addresses, then a reverse DNS lookup to find the domain corresponding to each IP. It then checks that the found targets actually speak SSL/TLS on port 443, unless another port is specified.
  2. It checks the SSL/TLS protocols. The service should not offer deprecated protocols like SSLv2, SSLv3, TLS 1.0 or TLS 1.1; only TLS 1.2 or TLS 1.3 should be offered.
  3. Then it checks whether any outdated ciphers, such as RC4, DES or 3DES, are accepted, and highlights stronger ones like AES or AEAD ciphers.
  4. It also checks Perfect Forward Secrecy (PFS). Without PFS, an attacker who obtains the server's private key can decrypt every recorded communication. With PFS, past sessions stay protected even if the private key is later compromised. Key exchanges like DHE (Diffie-Hellman Ephemeral) or ECDHE (Elliptic Curve Diffie-Hellman Ephemeral) should be used.
  5. It checks whether server-side cipher preference is enabled, which lets the server refuse deprecated cipher mechanisms and only accept secure ones.
  6. The server defaults section provides a lot of information about the certificate(s): issuer, validity, expiration… It also checks that the server uses secure session management (session IDs or session tickets).
  7. Then it checks the HTTP headers, including Strict-Transport-Security (HSTS), X-Frame-Options and Content-Security-Policy (CSP), which help secure web applications against common attacks like XSS, clickjacking and man-in-the-middle attacks.
  8. Next it tests for SSL/TLS vulnerabilities like Heartbleed, POODLE, BEAST, DROWN, ROBOT and Logjam.
  9. After that, testssl.sh checks, against a list of about 370 pre-configured ciphers, that the server supports strong, non-obsolete ones.
  10. The last step runs a client simulation that mimics how different clients (web browsers, mobile devices, different OSes…) negotiate with the server. This helps verify that the server is not vulnerable to downgrade attacks.
  11. Finally, testssl.sh assigns a grade, giving a global idea of the configuration's security level.
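To make the PFS idea from step 4 concrete, here is a toy finite-field Diffie-Hellman exchange. The parameters are demo-only (a Mersenne prime, nothing a real TLS stack would use); the point is that both sides draw fresh ephemeral private keys per session, so a later compromise of the server's long-term key reveals nothing about past session secrets:

```python
import secrets

P = 2**127 - 1   # a Mersenne prime; demo-only, not a real TLS group
G = 3            # demo generator

def session_secret() -> int:
    """One TLS-like session: both sides draw fresh ephemeral keys."""
    a = secrets.randbelow(P - 2) + 1         # client ephemeral private key
    b = secrets.randbelow(P - 2) + 1         # server ephemeral private key
    shared_client = pow(pow(G, b, P), a, P)  # client computes (g^b)^a mod p
    shared_server = pow(pow(G, a, P), b, P)  # server computes (g^a)^b mod p
    assert shared_client == shared_server    # both ends agree on the secret
    return shared_client

# Fresh keys every session => unrelated session secrets: nothing stored
# long-term can decrypt an old capture.
print(session_secret() != session_secret())   # True
```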

Certificate

To better understand what testssl.sh does, I'll briefly explain what's inside a certificate, also known as an X.509 certificate.

  • The first part of the certificate is the subject information: the common name of the website the certificate is intended to protect (example.com or *.example.com)
  • Then we have information about the issuer (e.g. the issuer's common name)
  • Next come the certificate's serial number (a unique positive integer identifying the certificate), the X.509 version (e.g. 3), the signature algorithm (e.g. SHA-256 with RSA) and the validity period.
  • The public key is part of a key pair that also includes a private key. The public key allows the client (our browser) to encrypt the communication with the server. The private key is kept by the server to sign documents or decrypt data.

Signed documents with the private key can be verified by anyone who has the public key.

  • The digital signature ensures authenticity (only the Certificate Authority can generate it) and integrity (if the certificate is tampered with, the signature no longer matches). It is computed by first hashing the certificate's data with a cryptographic hash function (like SHA-256); this hash is then signed with the private key of the certificate issuer (the Certificate Authority), and anyone can verify the signature with the issuer's public key.
  • Extensions allow the owner of the certificate to bind multiple domains to the same certificate, among other things.

Your browser may calculate a fingerprint of the certificate to check its authenticity.
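That fingerprint is just a hash over the certificate's raw (DER) bytes. A sketch, with made-up bytes standing in for a real certificate:

```python
import hashlib

def fingerprint(der: bytes) -> str:
    """SHA-256 fingerprint, formatted the way browsers display it."""
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

cert = b"...pretend these are DER-encoded certificate bytes..."
tampered = cert.replace(b"pretend", b"Pretend")   # a one-byte change

print(fingerprint(cert) == fingerprint(tampered))   # False: tampering detected
```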

The chain of trust, as its name states, is a trust relationship between multiple certificates: a Root CA at the top issues certificates to Intermediate CAs, and these Intermediate CAs in turn issue certificates to end entities, forming a hierarchy in which trust is established by verifying each certificate's authenticity and validity up to the Root CA. We may have the following diagram:

[Tools_testssl.png]

Intermediate certificates improve security: the root CA stays unexposed and its private key protected, while only the intermediate CAs are exposed. If an intermediate CA is compromised, it can simply be revoked.

Default Use Case

You can use testssl.sh without specifying any flag and it will give you information about the certificate and possible vulnerabilities. You can use it like this:

testssl.sh $TARGET

More options

--file <fname>, or the equivalent -iL <fname>, is the mass-testing option: it lets you test multiple domains.

--basicauth <user:pass> This can be set to provide HTTP basic authentication credentials which are used during checks for security headers. BASICAUTH is the ENV variable you can use instead.

--ip <ip> tests either the supplied IPv4 or IPv6 address instead of resolving host(s) in <URI>. IPv6 addresses need to be supplied in square brackets. --ip=one means: just test the first A record DNS returns (useful for multiple IPs). If -6 and --ip=one was supplied an AAAA record will be picked if available. The --ip option might be also useful if you want to resolve the supplied hostname to a different IP, similar as if you would edit /etc/hosts or /c/Windows/System32/drivers/etc/hosts. --ip=proxy tries a DNS resolution via proxy.

--wide mode expands the cipher suite testing to include a larger and more exhaustive set of cipher suites.

DonPAPI

Definition

DonPAPI is a post-exploitation tool used to extract credentials and sensitive data from Windows machines by targeting the Data Protection API (DPAPI). DPAPI manages the symmetric encryption of secrets in a Windows environment. What we need to understand is that storing a secret generates multiple files: a blob in C:\Users\$USERNAME\AppData\Roaming\Microsoft\Credentials containing raw bytes and the GUID of the master key file (located in C:\Users\$USERNAME\AppData\Roaming\Microsoft\Protect\$USER_SID). With those two files and $USERNAME's password, we can recover any "protected" password that went through DPAPI.

You can also recover DPAPI credentials from other users if you are a local or domain administrator.

You can provide the Net-NTLM hash instead of the password if you only have that.

Here is a French explanation of how DPAPI works, and here is the English one ;)

Default Use Case

By default, DonPAPI will collect the following credentials:

  • Chromium: Chromium browser Credentials, Cookies and Chrome Refresh Token

  • Certificates: Windows Certificates

  • CredMan: Credential Manager

  • Firefox: Firefox browser Credentials and Cookies

  • MobaXterm: Mobaxterm Credentials

  • MRemoteNg: MRemoteNg Credentials

  • RDCMan: RDC Manager Credentials

  • Files: Files in the Desktop and Recent folders

  • SCCM: SCCM Credentials

  • Vaults: Vaults Credentials

  • VNC: VNC Credentials

  • Wifi: Wifi Credentials

You can use it with only a username and password:

DonPAPI collect -d $DOMAIN -u $USER -p $PASSWORD -t $TARGET_IP

More options

donpapi gui: lets you browse the found credentials in a web GUI.

You can use the --basic-auth $USER:$PASSWORD with the gui option so that not everyone has access to it.

-H LMHASH:NTHASH allows you to provide hashes instead of passwords

This post is licensed under CC BY 4.0 by the author.