Introduction
This post is about understanding how the main tools work: it is great to pull off an attack with a tool, but it's better if you can do it from scratch. If you know how the tools work, then you know how the attacks work.
Nmap
Definition
Nmap is a powerful network scanning tool used to discover hosts, services, and vulnerabilities by performing port scanning, OS detection, and more.
Default Use Case
To check whether a machine is up, nmap pings the host; if it answers the ping, the host is marked as up. If there is no answer, nmap tries to reach ports 80 (HTTP) and 443 (HTTPS) over TCP.
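To make the idea concrete, here is a minimal host-discovery sketch in Python with scapy (root required). The target IP is a placeholder for a lab host you are allowed to scan, and the logic is deliberately simplified compared to nmap's real discovery probes:

```python
#!/usr/bin/env python3
# Simplified host discovery: ICMP echo first, then a TCP SYN to 443 as fallback.
# Placeholder target; this is a sketch, not nmap's actual probe set.
from scapy.all import IP, ICMP, TCP, sr1, conf

conf.verb = 0                       # silence scapy's own output
target = "192.168.56.10"            # assumption: a lab host

if sr1(IP(dst=target) / ICMP(), timeout=2) is not None:
    print(f"{target} is up (answered the ping)")
elif sr1(IP(dst=target) / TCP(dport=443, flags="S"), timeout=2) is not None:
    print(f"{target} is up (answered on 443/tcp)")
else:
    print(f"{target} appears down (or is dropping our probes)")
```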
By default, nmap scans the 1,000 most commonly used TCP ports on the given target (ex: nmap $TARGET). If we read the documentation, we see that it divides ports into six states: open, closed, filtered, unfiltered, open|filtered, or closed|filtered.
| State | Description |
|---|---|
| open | An application is actively accepting TCP connections, UDP datagrams or SCTP associations on this port. Finding these is often the primary goal of port scanning. Security-minded people know that each open port is an avenue for attack. Attackers and pen-testers want to exploit the open ports, while administrators try to close or protect them with firewalls without thwarting legitimate users. Open ports are also interesting for non-security scans because they show services available for use on the network. |
| closed | A closed port is accessible (it receives and responds to Nmap probe packets), but there is no application listening on it. They can be helpful in showing that a host is up on an IP address (host discovery, or ping scanning), and as part of OS detection. Because closed ports are reachable, it may be worth scanning later in case some open up. Administrators may want to consider blocking such ports with a firewall. Then they would appear in the filtered state, discussed next. |
| filtered | Nmap cannot determine whether the port is open because packet filtering prevents its probes from reaching the port. The filtering could be from a dedicated firewall device, router rules, or host-based firewall software. These ports frustrate attackers because they provide so little information. Sometimes they respond with ICMP error messages such as type 3 code 13 (destination unreachable: communication administratively prohibited), but filters that simply drop probes without responding are far more common. This forces Nmap to retry several times just in case the probe was dropped due to network congestion rather than filtering. This slows down the scan dramatically. |
| unfiltered | The unfiltered state means that a port is accessible, but Nmap is unable to determine whether it is open or closed. Only the ACK scan, which is used to map firewall rulesets, classifies ports into this state. Scanning unfiltered ports with other scan types such as Window scan, SYN scan, or FIN scan, may help resolve whether the port is open. |
| open|filtered | Nmap places ports in this state when it is unable to determine whether a port is open or filtered. This occurs for scan types in which open ports give no response. The lack of response could also mean that a packet filter dropped the probe or any response it elicited. So Nmap does not know for sure whether the port is open or being filtered. The UDP, IP protocol, FIN, NULL, and Xmas scans classify ports this way. |
| closed|filtered | This state is used when Nmap is unable to determine whether a port is closed or filtered. It is only used for the IP ID idle scan. |
By default Nmap uses the TCP SYN scan (`-sS` flag), which requires root privileges. It sends a SYN packet and waits for an answer from the target. If it receives a SYN/ACK, the port is listening (open). If the response is a RST (reset), the port is marked as closed (non-listener). If no response is received, the port is marked as filtered.
The `SYN` scan won't finish the 3-way handshake: it will not send the `ACK` after receiving the `SYN/ACK`.
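As a rough illustration of what happens under the hood with `-sS`, here is a minimal SYN-scan sketch with scapy (root required). The target and port list are placeholders, and there is none of nmap's retry or timing logic:

```python
#!/usr/bin/env python3
# Simplified SYN scan: send a SYN, classify the port from the reply, never
# complete the handshake. Placeholder lab target.
from scapy.all import IP, TCP, sr1, send, conf

conf.verb = 0
target = "192.168.56.10"            # assumption: a lab host

for port in (22, 80, 443, 3389):
    reply = sr1(IP(dst=target) / TCP(dport=port, flags="S"), timeout=1)
    if reply is None:
        print(f"{port}/tcp filtered (no response)")
    elif reply.haslayer(TCP) and (int(reply[TCP].flags) & 0x12) == 0x12:  # SYN/ACK
        print(f"{port}/tcp open")
        # Like nmap -sS, send a RST instead of the final ACK of the handshake
        send(IP(dst=target) / TCP(dport=port, flags="R"), verbose=0)
    elif reply.haslayer(TCP) and (int(reply[TCP].flags) & 0x04) != 0:      # RST
        print(f"{port}/tcp closed")
```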
More options
-sU: This flag performs a UDP scan instead of a TCP one. It sends a protocol-specific payload to ports with well-known protocols (such as DNS or SNMP) and empty packets to every other port, which speeds UDP scans up a little. UDP scans are slow because, without TCP's 3-way handshake, we can't be sure whether the machine received our packet: it may have been lost, timed out, or dropped by protections like firewalls or IDS. Closed ports often send back an ICMP port unreachable error, but many hosts rate limit ICMP port unreachable messages by default. For example, the Linux 2.4.20 kernel limits destination unreachable messages to one per second (in net/ipv4/icmp.c), which makes a 65,536-port scan last more than 18 hours. And as the man page says, "Nmap detects rate limiting and slows down accordingly to avoid flooding the network with useless packets that the target machine will drop."
You can speed up your `UDP` scans by running them on multiple hosts at the same time, scanning from behind the firewall, and using `--host-timeout` to skip slow hosts.
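A stripped-down version of a UDP probe could look like this (scapy, root required, placeholder target). Unlike nmap -sU it only sends empty datagrams, without the protocol-specific payloads:

```python
#!/usr/bin/env python3
# Simplified UDP scan: empty datagram, then classify from the (lack of) answer.
from scapy.all import IP, UDP, ICMP, sr1, conf

conf.verb = 0
target = "192.168.56.10"            # assumption: a lab host

for port in (53, 123, 161):
    reply = sr1(IP(dst=target) / UDP(dport=port), timeout=2)
    if reply is None:
        # No answer: the service may have silently accepted it, or a filter dropped it
        print(f"{port}/udp open|filtered")
    elif reply.haslayer(ICMP) and reply[ICMP].type == 3 and reply[ICMP].code == 3:
        print(f"{port}/udp closed (ICMP port unreachable)")
    elif reply.haslayer(UDP):
        print(f"{port}/udp open (got a UDP answer back)")
```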
-sT: The TCP connect scan replaces the default SYN scan when it is not applicable. It is less stealthy because it does the full 3-way handshake (SYN -> SYN/ACK -> ACK), closes the connection (FIN -> FIN/ACK -> ACK) and may trigger alerts on the audited systems. The main advantage of this scan is that it doesn't require root privileges. Another advantage is that, since it completes the handshake and closes the connection, it is less brutal than the SYN scan, which never closes its connections.
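The connect scan boils down to asking the OS to open a normal TCP connection, which is why no root privileges are needed. A minimal sketch with plain sockets (placeholder target):

```python
#!/usr/bin/env python3
# Simplified connect scan: the kernel performs the full handshake and teardown.
import socket

target = "192.168.56.10"            # assumption: a lab host
for port in (22, 80, 443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1)
    try:
        s.connect((target, port))   # SYN -> SYN/ACK -> ACK done by the kernel
        print(f"{port}/tcp open")
    except socket.timeout:
        print(f"{port}/tcp filtered (no response)")
    except OSError:
        print(f"{port}/tcp closed")
    finally:
        s.close()                   # graceful teardown of any open connection
```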
-Pn: Sometimes nmap gives you an error telling you to run with this option because the machine appears to be down. This is the "no ping" option: nmap will not ping the host to check whether it is up, but will instead treat it as up without verification.
-sV: Version detection is based on the responses nmap receives during its scan. It has more than 6,500 patterns covering over 650 protocols. This information is stored in the `nmap-service-probes` database, which contains probes for querying various services and match expressions to recognize and parse the responses.
-sC: You can run nmap with this flag to run the default set of scripts. This uses the Nmap Scripting Engine (NSE); you can find information on the different script categories here. These scripts are written in the Lua programming language. Nmap is very flexible, allowing you to write and use your own scripts.
To use your own script, either pass it with `--script $FILENAME` or drop it in `/usr/share/nmap/scripts/`. You can use the `*` wildcard to select all scripts that start with a word (ex: `ftp-*`). You can also combine the `default`, `intrusive` or `safe` categories with boolean `and`, `or` and `not` (ex: `--script "(default or safe or intrusive) and not http-*"`).
-O: The OS detection flag gives us more information about the target's OS, such as its name (Windows, Linux, macOS…), its version and the device type (router, switch…). It is based on a database of known OS fingerprints (`nmap-os-db`). Nmap probes the host and compares the responses against the known fingerprints; if one matches, the information is printed out.
Ffuf, gobuster…
Definition
All the tools like Ffuf, gobuster, dirsearch… are used to brute-force directories, files, domains and parameters on web servers to discover hidden content or vulnerabilities.
Here I am going to talk about vhost, subdomain and page scanning.
Because web servers can host multiple websites, we can choose which website to serve for which domain: this is virtual hosting. The same server answers every request, so to specify which website you want to reach, you set it in the `Host: XXX` HTTP header.
Because we want people to access the vhosts on our server, we create a DNS record for each of them so that they resolve to the corresponding IP.
In summary:
| Feature | Subdomains | Virtual Hosts (vhosts) |
|---|---|---|
| Definition | Prefixes added to a domain in DNS (e.g., sub.domain.com) | Server configuration that serves multiple sites on one server |
| Purpose | Organize sections of a domain or services. Also allows to create a DNS record for a specific IP address. | Host multiple websites on the same server. |
| DNS Relation | Configured in DNS | Configured on the web server (Apache, Nginx, etc.) |
| Example | blog.example.com, shop.example.com | Web server hosting both example.com and another.com on the same IP |
If you like pretty outputs with emojis and stuff, check out feroxbuster or dirsearch
To detect whether a subdomain exists, tools like ffuf or gobuster do a DNS query (usually an A record lookup) to check if the subdomain resolves to an IP address. If an IP is found, the tool can contact it and start scanning it.
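A bare-bones version of that subdomain check could look like this (hypothetical domain and wordlist path):

```python
#!/usr/bin/env python3
# Simplified subdomain enumeration: keep the names that return an A record.
import socket

domain = "example.com"                           # assumption: target domain
with open("/path/to/dns/wordlist") as wordlist:  # assumption: wordlist path
    for word in wordlist:
        name = f"{word.strip()}.{domain}"
        try:
            ip = socket.gethostbyname(name)      # A record lookup
            print(f"{name} -> {ip}")
        except socket.gaierror:
            pass                                 # no record: does not resolve
```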
For vhost detection, we send a basic HTTP request specifying `Host: XXX`, where XXX is the vhost we want to test. Depending on the response we receive from the server, we guess whether the vhost exists or not. For example, with ffuf, we filter out the response length that the server returns for vhosts we know are not real.
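A minimal sketch of that vhost technique, assuming a placeholder target, wordlist, domain suffix and baseline response size:

```python
#!/usr/bin/env python3
# Simplified vhost enumeration: same IP every request, only the Host header
# changes; responses matching the "bad vhost" size are filtered out.
import requests

target = "https://192.168.56.10"                   # assumption: the web server
baseline_size = 4242                               # size returned for a fake vhost

with open("/path/to/vhost/wordlist") as wordlist:  # assumption: wordlist path
    for word in wordlist:
        vhost = f"{word.strip()}.target.local"     # hypothetical domain suffix
        r = requests.get(target, headers={"Host": vhost}, verify=False, timeout=5)
        if len(r.content) != baseline_size:        # different size: likely real
            print(f"{vhost} ({r.status_code}, {len(r.content)} bytes)")
```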
In summary:
| Enumeration Type | Protocol Used | What is Sent | What is Expected |
|---|---|---|---|
| Subdomain Enumeration | DNS (UDP/53) | DNS queries | IP address for valid subdomains (A or CNAME records) |
| Vhost Enumeration | HTTP/HTTPS (TCP/80 or 443) | HTTP requests with different Host headers | Valid virtual host content or errors |
For page or directory scanning, those tools simply make an HTTP request (the DNS resolution is done automatically to get the IP of the domain) and, based on the response code (200, 301, 404…), they tell you whether you have access, whether it is forbidden, whether the page does not exist… You can also filter by page size, number of words…
A web server can choose not to respect the HTTP status code standard and return a 200 even if the page doesn't exist. In that case we can filter by response size, number of words…
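A stripped-down directory brute-forcer doing exactly that kind of filtering might look like this (placeholder target and wordlist):

```python
#!/usr/bin/env python3
# Simplified directory brute force: filter on status code; response size could
# be used the same way for servers that always answer 200.
import requests

target = "https://target.local"                   # assumption: target site
with open("/path/to/dir/wordlist") as wordlist:   # assumption: wordlist path
    for word in wordlist:
        url = f"{target}/{word.strip()}"
        r = requests.get(url, verify=False, timeout=5, allow_redirects=False)
        if r.status_code != 404:                  # keep anything but "Not Found"
            print(f"{r.status_code:>3} {len(r.content):>8}  {url}")
```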
In summary:
| Status Code | Category | Description |
|---|---|---|
| 100 | Informational | Continue: The server has received the request headers, and the client should proceed to send the request body. |
| 101 | Informational | Switching Protocols: The server is switching to a different protocol as requested by the client. |
| 200 | Success | OK: The request was successful, and the server returned the requested data. |
| 201 | Success | Created: The request was successful, and a new resource has been created. |
| 202 | Success | Accepted: The request has been accepted for processing, but the processing is not complete. |
| 204 | Success | No Content: The request was successful, but the server is not returning any content. |
| 301 | Redirection | Moved Permanently: The requested resource has been moved to a new permanent URL. |
| 302 | Redirection | Found (Temporary Redirect): The requested resource is temporarily at a different URL. |
| 304 | Redirection | Not Modified: The cached version of the requested resource can be used. |
| 400 | Client Error | Bad Request: The request was invalid or cannot be understood by the server. |
| 401 | Client Error | Unauthorized: Authentication is required to access the resource. |
| 403 | Client Error | Forbidden: The server understood the request, but it refuses to authorize it. |
| 404 | Client Error | Not Found: The server could not find the requested resource. |
| 405 | Client Error | Method Not Allowed: The HTTP method used is not allowed for the requested resource. |
| 408 | Client Error | Request Timeout: The server timed out waiting for the client to send the request. |
| 418 | Client Error | I’m a Teapot: A joke status code, sometimes used for requests the server does not wish to handle. |
| 429 | Client Error | Too Many Requests: The client has sent too many requests in a given period. |
| 500 | Server Error | Internal Server Error: The server encountered an error and could not complete the request. |
| 502 | Server Error | Bad Gateway: The server was acting as a gateway or proxy and received an invalid response from the upstream server. |
| 503 | Server Error | Service Unavailable: The server is temporarily unavailable, usually due to maintenance or overload. |
| 504 | Server Error | Gateway Timeout: The server was acting as a gateway and did not receive a timely response from the upstream server. |
Default Use Case
Ffuf
For the vhost scan you can fuzz like this (where 4242 is the size of the response for a non-existent vhost):
ffuf -w /path/to/vhost/wordlist -u https://target -H "Host: FUZZ.target" -fs 4242
For subdomains you can run the following command:
ffuf -w /path/to/dns/wordlist -u https://FUZZ.target
And for page or directory listing you can just run the following:
ffuf -w /path/to/dir/wordlist -u https://target/FUZZ
Gobuster
For the vhost scan you can run the following command:
gobuster vhost -w /path/to/vhost/wordlist -u https://target
For subdomains you can run the following command:
gobuster dns -w /path/to/dns/wordlist -d $DOMAIN
You may want to add the `-i` option so that the IP addresses of the found domains are printed out.
And for page or directory listing you can just run the following:
gobuster dir -w /path/to/dir/wordlist -u https://target
More options
Ffuf
-fc, -fl, -fs, -fw: You can filter responses by many things, like the status code (-fc), the number of lines in the response (-fl), the response size (-fs), the number of words (-fw) and much more…
-recursion: This allows us to search recursively inside the discovered folders.
If you know that there will be a lot of nested folders, you can add `-recursion-depth X`, where X is the maximum depth ffuf will go.
-e $A,$B,$C: This allows you to provide one or multiple extensions ($A,$B,$C looks like .html,.php,.txt)
-t 50: If you want to speed up your scanning, you can increase the number of threads.
Don’t set the number of threads too high. It may crash the site, get you banned or generate a lot of errors.
For more information about this tool, you can check codingo website.
Gobuster
-x $A,$B,$C: This allows you to provide one or multiple extensions ($A,$B,$C looks like html,php,txt)
Pay attention to the fact that in ffuf the extension notation is `-e .EXTENSION`, but in gobuster we only provide the extension without the dot (`-x EXTENSION`).
-t 50: If you want to speed up your scanning, you can increase the number of threads.
Responder
Definition
Responder focuses on capturing authentication credentials from protocols such as SMB, HTTP or LDAP by spoofing name resolution, or simply listening for it, with protocols like LLMNR (Link-Local Multicast Name Resolution), NBT-NS (NetBIOS Name Service), and mDNS (Multicast DNS).
Responder comes in two modes:
- Listening/passive mode (`-A`, `--analyze` flag): Responder waits for "someone" (a computer, not a user) to perform a name resolution and logs the requests it sees without answering them.
- Active mode: Responder answers as the host being resolved by spoofing protocols like LLMNR, NBT-NS and mDNS, and sets up SMB, HTTP or WPAD servers to capture the user's authentication. The client then sends authentication material, such as an NTLM challenge/response, over protocols like SMB, which we can crack offline or use for relay.
- LLMNR, NBT-NS, mDNS Poisoning: Spoofs the LLMNR, NBT-NS or mDNS protocols to answer the victim and capture challenges (Net-NTLM hashes).
- SMB and HTTP Server: Acts as a fake SMB or HTTP server to capture NTLM hashes.
- WPAD Rogue Proxy Server: Captures Web Proxy Auto-Discovery protocol requests and redirects them to capture Net-NTLM hashes.
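To give an idea of the passive side, here is a minimal analyze-mode-like sketch that only listens for LLMNR queries on the local network and prints who is asking for what, without poisoning anything. Responder itself does far more (NBT-NS, mDNS, answering, rogue servers…):

```python
#!/usr/bin/env python3
# Passive LLMNR listener: join the multicast group and print incoming queries.
import socket
import struct

LLMNR_GROUP, LLMNR_PORT = "224.0.0.252", 5355

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", LLMNR_PORT))
# Ask the kernel to deliver traffic sent to the LLMNR multicast group
mreq = struct.pack("4s4s", socket.inet_aton(LLMNR_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, (src_ip, src_port) = sock.recvfrom(1024)
    # LLMNR reuses the DNS layout: 12-byte header, then the queried name as
    # length-prefixed labels (e.g. b"\x08fileserv\x00")
    offset, labels = 12, []
    while offset < len(data) and data[offset] != 0:
        length = data[offset]
        labels.append(data[offset + 1:offset + 1 + length].decode(errors="replace"))
        offset += 1 + length
    print(f"{src_ip} is looking for '{'.'.join(labels)}'")
```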
Default Use Case
The default use case is responder -I $INTERFACE where $INTERFACE is the interface linked to your network.
By default, Responder will start poisoning the name resolution protocols LLMNR, NBT-NS, mDNS and DNS:
![[Tools_Resp_Poisoners.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_Resp_Poisoners.png)
And it opens several servers on your machine (SMB, HTTP, FTP…) for users to connect to:
![[Tools_Resp_Servers.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_Resp_Servers.png)
Once you launch it, wait for someone to connect to you and try to authenticate. You can capture NTLM hashes and try to crack them with hashcat, or relay them with ntlmrelayx:
![[Tools_Resp_Hash.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_Resp_Hash.png)
More options
You can append -wd to the initial command. The -w flag starts the WPAD rogue proxy server and -d makes Responder answer DHCP broadcast requests.
Note that the `-d` option will inject a `WPAD` server in the `DHCP` response.
ntlmrelayx
Definition
Ntlmrelayx is a tool that relays NTLM authentication to other services, allowing an attacker to gain unauthorized access through relay attacks. The targets we will be looking for are any servers on the domain that do not require signing (SMB/LDAP signing not enforced).
Default Use Case
Ntlmrelayx relays the connections it receives to the chosen servers, giving us authenticated sessions. You can run the following command:
ntlmrelayx -tf targets.txt -of netntlm -smb2support -socks
- `-tf $TARGETS`: file containing the list of servers you want to relay to.
- `-of $FILENAME`: saves the captured Net-NTLM responses so you can try to crack them later.
- `-smb2support`: adds support for SMB version 2.
- `-socks`: starts a SOCKS proxy to run commands through the relayed authenticated sessions. This allows us to execute commands as a specific user without knowing their password.
With those options enabled, we can see the incoming successful authentications:
![[Tools_ntlmrelayx_cmd.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_ntlmrelayx_cmd.png)
Those connections come from Responder, which sends them to ntlmrelayx for relaying.
By running the `socks` command, we can see all spoofed users and whether or not they are admin on the server:
ntlmrelayx creates a session for each relayed connection (as we can see above). Tools that go through a proxy, like proxychains, should point to local port 1080 (ntlmrelayx's default) to check whether a session is available for the given IP and port. If so, it will try to connect to it, but admin rights may be required depending on the action we want to perform (AdminStatus set to TRUE).
Because we used the socks option, we can now run commands through proxychains, such as a secretsdump with the admin user:
![[Tools_ntlmrelayx_secdump.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_ntlmrelayx_secdump.png)
You can use tools from impacket with the `-no-pass` option when you are using a relayed connection. If the option is not available, just specify an empty password and it should work the same. Only SMB and LDAP can be relayed.
Here, we used ntlmrelayx with Responder, but you can use it with coercer to force the connection instead of waiting for it.
Tools like nxc (previously cme) don't have the -no-pass option. Just specify an empty password (or anything else, it will be ignored anyway) and it should be OK:
![[Tools_ntlmrelayx_nxc.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_ntlmrelayx_nxc.png)
More options
-t $PROTOCOL://$SERVER if you want to relay to other protocols, you can specify it directly in the command line with this option.
It is also possible to prepend the protocol in the file used with the `-tf` option. We will then have lines like `ldap://192.168.56.12` or `smb://192.168.56.12`.
Note that when we relayed to SMB we could create socks sessions to keep a stable connection. This is not the case when relaying to LDAP: you will have to use the tool's options to specify what to do, like `--add-computer $COMPUTER_NAME`.
-l $LOOT_DIR will dump the looted information, like SAM or LDAP data, into $LOOT_DIR. By default, if a relayed connection allows an LDAP connection, ntlmrelayx will generate files describing the LDAP content using ldapdomaindump (e.g. list of computers by OS, list of users with their description, domain policies…)
lsassy
Definition
Lsassy is a post-exploitation tool designed to extract Windows credentials (LSASS dumps) remotely without touching disk, minimizing detection.
LSASS is the process that holds the credentials of the last logged-on users (local or domain). It checks users logging in, handles password changes and creates access tokens.
The use of Credential Guard isolates the process in order to protect it. For additional protection you can check this post from Microsoft.
Lsassy recovers passwords remotely using either ProcDump (a Microsoft tool from the Sysinternals suite) or the Windows DLL comsvcs.dll, which can dump a process, but only if you are SYSTEM.
This tool was developed by pixis so you can check his blog on hackndo if you want more information about the tool.
Default Use Case
You can run lsassy when you have an admin user on a server:
lsassy -d $DOMAIN -u $USER -p $PASSWORD $SERVER_IP
or using nxc / cme:
nxc smb $SERVER_IP -d $DOMAIN -u $USER -p $PASSWORD -M lsassy
![[Tools_Lsassy.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_Lsassy.png)
More options
-m allows you to specify the dumping method, like dumpertdll and many others. You'll need to specify the path of the module (DLL or binary) you are using with -O dumpertdll_path=$PATH_OF_THE_DLL to evade AV. You can find the code to compile on the dumpert GitHub repository.
Secretsdump
Definition
Secretsdump is a tool to extract sensitive information like password hashes from remote systems without executing code on the target machine.
You will need admin rights to be able to run this script successfully
- It dumps SAM and LSA (SECURITY) secrets from the registry and saves the result in a temp directory.
- It also dumps NTDS (only on a DC): it gets the list of domain users, their hashes and Kerberos keys via [MS-DRSR] DRSGetNCChanges() or via vssadmin.
- After dumping everything, it cleans its traces by removing any created files.
Default Use Case
You can run Secretsdump with only a user and a password (or hash):
secretsdump $DOMAIN/$USER:$PASSWORD@$SERVER_IP
![[Tools_Secretsdump.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_Secretsdump.png)
More options
-hashes: you may want to use this flag if you don't have the admin's password and want to use their hash instead.
testssl.sh
Definition
testssl.sh is a tool that checks SSL/TLS configurations on servers to detect vulnerabilities or misconfigurations related to encryption and security protocols.
- First, testssl.sh does a DNS lookup to find the (possibly multiple) IP addresses. After that, it does a reverse DNS lookup to find the corresponding domain from the IPs. Then, it checks that the found targets indeed offer SSL/TLS on port 443, unless another port is specified.
- It checks the SSL/TLS protocols. The service should not offer deprecated protocols like SSLv2, SSLv3, TLS 1.0 or TLS 1.1; only TLS 1.2 or TLS 1.3 should be offered.
- Then it checks whether any outdated ciphers, such as RC4, DES or 3DES, are used and whether stronger ciphers like AES or AEAD ciphers are offered.
- It also checks Perfect Forward Secrecy (PFS). Without PFS, an attacker who recovers the server's private key can decrypt every recorded communication; with PFS, a compromised private key does not allow decrypting previously captured traffic. Key exchanges like DHE (Diffie-Hellman Ephemeral) or ECDHE (Elliptic Curve Diffie-Hellman Ephemeral) should be used.
- It checks whether server-side cipher ordering is enabled. This lets the server impose its own (hopefully strong) cipher preference instead of accepting whatever deprecated cipher the client asks for.
- The server defaults section provides a lot of information about the certificate(s), like the issuer, validity, expiration… It also checks that the server uses secure session management (session IDs or session tickets).
- Then it checks the HTTP headers. This includes headers like Strict-Transport-Security (HSTS), X-Frame-Options and Content-Security-Policy (CSP), which help secure web applications against common attacks like XSS, clickjacking and man-in-the-middle attacks.
- Next it checks for SSL/TLS vulnerabilities like Heartbleed, POODLE, BEAST, DROWN, ROBOT and Logjam.
- After that, testssl.sh checks, against a list of 370 pre-configured ciphers, that the server supports strong and non-obsolete ones.
- The last step runs a client simulation that mimics how different clients (web browsers, mobile devices, different OSes…) negotiate with the server. This helps verify that the server is not vulnerable to downgrade attacks.
- Finally, testssl.sh gives a grade, providing a global idea of the configuration's security level.
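As an illustration of the protocol-version step only, here is a minimal sketch using Python's ssl module against a placeholder host (keep in mind that very old protocols may also be refused by your local OpenSSL build, not only by the server):

```python
#!/usr/bin/env python3
# Check which TLS protocol versions a server accepts by forcing one version
# per handshake. Only the protocol check, no cipher or vulnerability tests.
import socket
import ssl

host, port = "example.com", 443          # assumption: target host

versions = {
    "TLS 1.0": ssl.TLSVersion.TLSv1,
    "TLS 1.1": ssl.TLSVersion.TLSv1_1,
    "TLS 1.2": ssl.TLSVersion.TLSv1_2,
    "TLS 1.3": ssl.TLSVersion.TLSv1_3,
}

for name, version in versions.items():
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # we only care whether the handshake works
    ctx.minimum_version = version
    ctx.maximum_version = version        # force exactly this protocol version
    try:
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host):
                print(f"{name}: offered")
    except (ssl.SSLError, OSError):
        print(f"{name}: not offered")
```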
Certificate
To better understand what testssl.sh does, I'll briefly explain what's in a certificate, also known as an X.509 certificate.
- The first part of the certificate, when we open it, is the subject information: the common name of the website that the certificate is intended to protect (example.com or *.example.com).
- Then we have information about the issuer (ex: the common name of the issuer).
- We then find the certificate's serial number (a unique positive integer identifying the certificate), the X.509 version (ex: 3), the signature algorithm (ex: SHA-256 with RSA) and the validity period.
- The public key is part of a key pair that also includes a private key. The public key allows the client (our browser) to encrypt the communication with the server. The private key is kept by the server to sign documents or decrypt data.
Documents signed with the private key can be verified by anyone who has the public key.
- The digital signature ensures authenticity (the signature can only be generated by the Certificate Authority) and integrity (if the certificate is tampered with, the digital signature won't match). The digital signature of an X.509 certificate is calculated by first creating a hash of the certificate's data using a cryptographic hash function (like SHA-256). This hash is then encrypted with the private key of the certificate issuer (the Certificate Authority) to generate the signature, which can be verified by others using the issuer's public key.
- We can have extensions that allow the owner of the certificate to bind multiple domains to the same certificate (Subject Alternative Names).
Your browser may calculate a fingerprint of the certificate to check its authenticity.
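To look at those fields yourself, here is a small sketch that fetches a server certificate and prints the parts discussed above (placeholder host, using the third-party cryptography package):

```python
#!/usr/bin/env python3
# Fetch a certificate over TLS and print subject, issuer, serial, validity,
# signature hash, fingerprint and the SAN extension.
import ssl
from cryptography import x509
from cryptography.hazmat.primitives import hashes

host = "example.com"                                     # assumption: target host
pem = ssl.get_server_certificate((host, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject     :", cert.subject.rfc4514_string())
print("Issuer      :", cert.issuer.rfc4514_string())
print("Serial      :", cert.serial_number)
print("Valid from  :", cert.not_valid_before)
print("Valid until :", cert.not_valid_after)
print("Sig. hash   :", cert.signature_hash_algorithm.name)
print("SHA-256 fp  :", cert.fingerprint(hashes.SHA256()).hex())
try:
    # Subject Alternative Names: the extra domains bound to this certificate
    san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    print("SANs        :", san.value.get_values_for_type(x509.DNSName))
except x509.ExtensionNotFound:
    pass
```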
The chain of trust, as its name states, is a trust relationship between multiple certificates: a Root CA at the top issues certificates to Intermediate CAs, and these Intermediate CAs in turn issue certificates to end entities, forming a hierarchical structure that establishes trust through verification of each certificate's authenticity and validity up to the Root CA. We may have the following diagram:
![[Tools_testssl.png]](https://raw.githubusercontent.com/Nouman404/nouman404.github.io/main/_posts/Notes/photos/Tools_testssl.png)
The use of intermediate certificates allows better security: the root CA is never exposed, which protects its private key, and only the intermediate CAs are exposed. If an intermediate CA is compromised, it can simply be revoked.
Default Use Case
You can use testssl.sh without specifying any flag and it will give you information about the certificate and possible vulnerabilities. You can use it like this:
testssl.sh $TARGET
More options
--file <fname> or the equivalent -iL <fname> are mass testing options. They allow you to test multiple domains.
--basicauth <user:pass> This can be set to provide HTTP basic authentication credentials which are used during checks for security headers. BASICAUTH is the ENV variable you can use instead.
--ip <ip> tests either the supplied IPv4 or IPv6 address instead of resolving host(s) in <URI>. IPv6 addresses need to be supplied in square brackets. --ip=one means: just test the first A record DNS returns (useful for multiple IPs). If -6 and --ip=one was supplied an AAAA record will be picked if available. The --ip option might be also useful if you want to resolve the supplied hostname to a different IP, similar as if you would edit /etc/hosts or /c/Windows/System32/drivers/etc/hosts. --ip=proxy tries a DNS resolution via proxy.
--wide mode expands the cipher suite testing to include a larger and more exhaustive set of cipher suites.
DonPAPI
Definition
DonPAPI is a post-exploitation tool used to extract credentials and sensitive data from Windows machines by targeting the Data Protection API (DPAPI). DPAPI manages the symmetric encryption of secrets in a Windows environment. What we need to understand is that storing a secret generates multiple files: a blob is created in C:\Users\$USERNAME\AppData\Roaming\Microsoft\Credentials that contains raw bytes and the GUID of the master key file (located in C:\Users\$USERNAME\AppData\Roaming\Microsoft\Protect\$USER_SID). With those two files and the password of $USERNAME, we can recover any “protected” password that went through DPAPI.
You can also recover `DPAPI` credentials from other users if you are a local or domain administrator.
You can provide the NT hash instead of the password if that is all you have.
Default Use Case
By default, DonPAPI will collect the following credentials:
- Chromium: Chromium browser credentials, cookies and Chrome refresh token
- Certificates: Windows certificates
- CredMan: Credential Manager
- Firefox: Firefox browser credentials and cookies
- MobaXterm: MobaXterm credentials
- MRemoteNg: mRemoteNG credentials
- RDCMan: RDC Manager credentials
- Files: files on the Desktop and in the Recent folder
- SCCM: SCCM credentials
- Vaults: Vaults credentials
- VNC: VNC credentials
- Wifi: Wi-Fi credentials
You can use it with just a username and password:
DonPAPI collect -d $DOMAIN -u $USER -p $PASSWORD -t $TARGET_IP
More options
donpapi gui: lets you browse the collected credentials in a web GUI.
You can use `--basic-auth $USER:$PASSWORD` with the `gui` option so that not everyone has access to it.
-H LMHASH:NTHASH allows you to provide hashes instead of passwords