Wednesday, March 27, 2024

O365 AlternateSignInName - When Is This Used?

I was working with some threat hunters from BlueVoyant who called out an oddity in our O365 SignInLogs. We had successful sign-ins from dozens of accounts where the AlternateSignInName field (the Sign-in identifier field in the Azure AD sign-in log) contained a single account, and that account had been deleted months ago. Why would dozens of accounts have the same AlternateSignInName, and how can that happen with an account that no longer exists? Do we have an incident? Yikes.

So what is this AlternateSignInName? How does it get populated? Is the account actually used for authentication? In short, what the heck is happening here?

Googling AlternateSignInName returned very little about how it works. Microsoft has a reference for SignInLogs (https://learn.microsoft.com/en-us/azure/azure-monitor/reference/tables/signinlogs) but it clarifies little: "The identification that the user provided to sign in. It may be the userPrincipalName but it's also populated when a user signs in using other identifiers." I also noticed the field is usually blank in our logs. So how and why does AlternateSignInName get populated? I was hoping someone had posted a question about it on Reddit or StackOverflow, but no dice.

Here's a sample from Azure AD showing a login for my neutered username and a non-existent account for the Sign-in identifier (AlternateSignInName when it goes to Sentinel/Splunk).

Eventually we figured it out, at least for my organization. When you use Azure AD (now Entra ID) for authentication and have apps that use single sign-on, the initial login page only asks for an e-mail address. That e-mail address is used solely to determine where to redirect your authentication request. You then go to your organization's sign-in page and log in. So it doesn't actually matter what account you type in, as long as the domain is correct. It's this initial e-mail address that gets stored in AlternateSignInName in the logs. Note that it hasn't been used for authentication.

Then it redirects to my company's Azure AD sign-in page, where you type in your real username and password.

Now you've authenticated with your real username and password. But in the logs, the e-mail address you initially entered is forever immortalized (well, for the length of the retention period) in the AlternateSignInName field of the SignInLogs. That e-mail address can be real, fake, disabled, or deleted, because it's not actually used for authentication, just for the redirect to the correct authentication endpoint.
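If you want to hunt for this in exported SignInLogs, a quick mismatch check works. This is a minimal Python sketch; the field names follow the Sentinel SigninLogs schema, but the sample records are hypothetical:

```python
# Sketch: flag sign-ins where the e-mail typed on the initial SSO page
# (AlternateSignInName) doesn't match the account that actually
# authenticated (UserPrincipalName). Field names follow the Sentinel
# SigninLogs schema; the records themselves are made up for illustration.

def find_mismatched_signins(records):
    """Return sign-ins whose AlternateSignInName differs from the UPN."""
    hits = []
    for rec in records:
        alt = (rec.get("AlternateSignInName") or "").lower()
        upn = (rec.get("UserPrincipalName") or "").lower()
        if alt and alt != upn:
            hits.append(rec)
    return hits

signins = [
    {"UserPrincipalName": "alice@example.com",
     "AlternateSignInName": "alice@example.com"},
    {"UserPrincipalName": "bob@example.com",
     "AlternateSignInName": "departed.user@example.com"},
    {"UserPrincipalName": "carol@example.com",
     "AlternateSignInName": ""},  # the field is usually blank
]

for hit in find_mismatched_signins(signins):
    print(hit["UserPrincipalName"], "<-", hit["AlternateSignInName"])
```

A mismatch isn't necessarily malicious, as this post shows, but it's a cheap way to surface the cases worth a second look.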

But one more mystery remained: why did dozens of our users have the same AlternateSignInName?

In the cases I investigated, all the users signing in worked in the same department and used a shared computer where an account had already been cached in the browser. So on the initial login page, instead of an empty field to type in a new e-mail address, users were seeing Microsoft's account picker with the cached account listed. If you click the cached e-mail address, it takes you straight to the Azure AD login page. If you click "Use another account", you have to type in your e-mail address, then get redirected to the Azure AD login page and type in your username/password. It's one less step to just click the cached entry and go straight to the Azure AD login page, and that's what our efficient (lazy?) users were doing. In our case, the cached user had left the company months ago, but the entry still remained in the browser of the shared computer.

Thankfully, this was not an incident!

Tuesday, January 4, 2022

Microsoft Sentinel Workbooks in UTC Time

When I started using Microsoft Sentinel, one of the glaring issues I ran into as an incident responder was that while my logs were showing in UTC (as I selected), the Workbooks (dashboards) I created would only show in local time. I checked settings, changed my Azure profile to UTC, no luck. Why can't I select the timezone for my dashboard?

I finally found the answer. You can't select the time zone as a user. You have to bake it into the dashboard and your choices are UTC or local time.

(EDIT: I just discovered that the time picker is still in Local Time and I don't see any way to change it.)

You can't make a blanket change across the entire Workbook either. You have to change the formatting of EVERY time column in EVERY query in your Workbook. (I'd really like to talk to Microsoft about their UI design, because it's killing incident responders like me.)

Here's how to do it:

1. Click the button to Edit your Workbook.

2. Click Edit on the query you want to set to UTC. 

3. Click on the Column Settings button. (FYI, Column Settings ONLY appears if your query is showing results 🤬)

4. Click on the time column, set "Column renderer" to Date/Time.

    Click the checkbox for "Custom date formatting".

    Change the Date format style back to Short date time.

    Set the radio button under "Show time as" to UTC.


5. Do this for every single query in your Workbook.
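If clicking through every query is too painful, the same change can be made by editing the Workbook's JSON in the Advanced Editor. Here's a minimal Python sketch that forces UTC on every date formatter it finds; the gridSettings/dateFormat key names match Workbooks I've seen exported, but treat the exact schema as an assumption and diff the result before saving:

```python
# Sketch: patch an exported Workbook's JSON in one pass instead of
# clicking through every query. The structure assumed here
# (gridSettings -> formatters -> dateFormat -> showUtcTime) is my best
# guess at the export schema; verify against your own Workbook.
import json

def force_utc(node):
    """Recursively set showUtcTime on every date formatter found."""
    if isinstance(node, dict):
        if isinstance(node.get("dateFormat"), dict):
            node["dateFormat"]["showUtcTime"] = True
        for value in node.values():
            force_utc(value)
    elif isinstance(node, list):
        for item in node:
            force_utc(item)
    return node

workbook = json.loads("""
{"items": [{"content": {"gridSettings": {"formatters": [
  {"columnMatch": "TimeGenerated", "formatter": 6,
   "dateFormat": {"formatName": "shortDateTimePattern"}}
]}}}]}
""")
patched = force_utc(workbook)
print(json.dumps(patched))
```

Paste the patched JSON back into the Advanced Editor and save, then spot-check a few queries to confirm the columns now render in UTC.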



Thursday, December 9, 2021

Install-Module Not Installing?

 

Are you trying to run Install-Module, and while it looks like it's working, it never actually installs the module? You might have a permissions problem.

PS C:\Users\user> install-module -Name AzureADpreview -Verbose
VERBOSE: Using the provider 'PowerShellGet' for searching packages.
VERBOSE: The -Repository parameter was not specified.  PowerShellGet will use all of the registered repositories.
VERBOSE: Getting the provider object for the PackageManagement Provider 'NuGet'.
VERBOSE: The specified Location is 'https://www.powershellgallery.com/api/v2' and PackageManagementProvider is 'NuGet'.
VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='AzureADpreview'' for ''.
VERBOSE: Total package yield:'1' for the specified package 'AzureADpreview'.
VERBOSE: Performing the operation "Install-Module" on target "Version '2.0.2.138' of module 'AzureADPreview'".
VERBOSE: The installation scope is specified to be 'AllUsers'.
VERBOSE: The specified module will be installed in 'C:\Program Files\WindowsPowerShell\Modules'.
VERBOSE: The specified Location is 'NuGet' and PackageManagementProvider is 'NuGet'.
VERBOSE: Downloading module 'AzureADPreview' with version '2.0.2.138' from the repository
'https://www.powershellgallery.com/api/v2'.
VERBOSE: Searching repository 'https://www.powershellgallery.com/api/v2/FindPackagesById()?id='AzureADPreview'' for ''.
VERBOSE: InstallPackage' - name='AzureADPreview',
version='2.0.2.138',destination='C:\Users\user\AppData\Local\Temp\924842467'
VERBOSE: DownloadPackage' - name='AzureADPreview',
version='2.0.2.138',destination='C:\Users\user\AppData\Local\Temp\924842467\AzureADPreview\AzureADPreview.nupkg',
uri='https://www.powershellgallery.com/api/v2/package/AzureADPreview/2.0.2.138'
VERBOSE: Downloading 'https://www.powershellgallery.com/api/v2/package/AzureADPreview/2.0.2.138'.
VERBOSE: Completed downloading 'https://www.powershellgallery.com/api/v2/package/AzureADPreview/2.0.2.138'.
VERBOSE: Completed downloading 'AzureADPreview'.
VERBOSE: Hash for package 'AzureADPreview' does not match hash provided from the server.
VERBOSE: InstallPackageLocal' - name='AzureADPreview',
version='2.0.2.138',destination='C:\Users\user\AppData\Local\Temp\924842467'
VERBOSE: Catalog file 'AzureADPreview.cat' is not found in the contents of the module 'AzureADPreview' being installed.
VERBOSE: Valid authenticode signature found in the file 'AzureADPreview.psd1' for the module 'AzureADPreview'.

In this case, I don't have local administrator rights on my machine but I have the ability to run Powershell as administrator and elevate as needed (through a third party app). 

I mistakenly assumed that when you run PowerShell as Administrator, all the commands run with Administrator privileges. It turns out that isn't true in this setup, and that's why the install wasn't working.

Here's the workaround. Open a Run window and enter powershell.exe Install-Module -Name AzureADPreview. To run the command as Administrator, hit Ctrl+Shift+Enter instead of Enter. This runs the Install-Module command with Administrator privileges, and it will actually install.



Verify by checking the locations where PowerShell modules are installed on your machine. You should see the installed module (AzureADPreview in my example). Unfortunately, there are a few locations to check depending on your configuration.
  • C:\Program Files\WindowsPowerShell\Modules
  • C:\Windows\System32\WindowsPowerShell\v1.0\Modules
  • C:\Windows\SysWOW64\WindowsPowerShell\v1.0\Modules
  • C:\Users\user\Documents\WindowsPowerShell\Modules


I'm posting this because I spent some time banging my head against the wall without finding answers on Google. Hopefully this helps someone else.

Monday, June 14, 2021

List-ChromeExtensions.ps1 and List-ChromeExtensions.py

When I began investigating malicious Chrome extensions, the initial hurdle was figuring out what these 32-character codes mean, and how to do this work without constantly looking them up on Google.

Thus List-ChromeExtensions was born to help with these investigations. I wrote a PowerShell script for Windows and a Python 2.7 script for Mac. The scripts use a combination of the file system, each extension's manifest.json, and the Chrome Web Store to identify the name, creation time, whether it's currently in the Chrome Web Store, the description, version, user, Chrome profile, and computer. Additionally, there's a parameter to pull the permissions from the manifest.json.

Options include:

  • showdefaults - Default extensions are generally not malicious, so they are not displayed by default.
  • showpermissions - Lists the permissions section from the manifest.json. 
  • output - Powershell outputs in JSON or table. Python only outputs in JSON. 

List-ChromeExtensions.ps1

Optional Parameters (default):

-showdefaults ($false)/$true
-showpermissions ($false)/$true (recommended with -output json for readability)
-output (table)/json


List-ChromeExtensions.py

Optional Parameters (default):

--showdefaults (False)/True
--showpermissions (False)/True

Output Attributes

  • CreationTimeUTC - The folder creation time from the file system for the specific extension. This is the install time.
  • Name - The title of the extension.
  • Description - The description provided in the manifest.json if it exists.
  • Chrome_Store - Lists whether the extension is in the Chrome Web Store or is an extension installed by default.
  • Version - The version provided in the manifest.json.
  • Code - The 32-character code for the extension as seen in the extension folder.
  • User - The user with the extension installed.
  • Profile - The Chrome profile where the extension is installed. Typically this is Default, but if more than one Chrome profile exists it will show Profile 1, Profile 2, Profile 3, etc.
  • Computer - The Computer name. Helpful if you're aggregating results or storing data in a SOAR or ticketing platform.
  • Permissions (optional) - The permissions listed in the manifest.json. This is what the extension is allowed to access. This is helpful when looking for potentially malicious extensions that have more permissions than they should reasonably need.


Download



Friday, June 11, 2021

The Basics On Malicious Browser Extensions

In recent years browser extensions have become another unheralded avenue for attackers to steal passwords, exfiltrate data, identify users, and generate advertising revenue. They have been used by both nation-states and e-crime groups. The reason you should care is that there's almost no coverage from security tools and few enterprise controls. It's the malicious code in your enterprise that any user can download and that no one is looking for.

Chrome, Firefox, and Edge each have their own ecosystems for browser extensions and there's malware in all of them. But Chrome is arguably the most common ecosystem and a number of Chromium based browsers (Brave, Opera, Vivaldi) also support Chrome extensions. I've been shocked by the sheer number of malicious extensions that were identified and removed from the Chrome Web Store but are still running on end user systems. Google can update your Chrome extensions automatically, but for some reason isn't removing malicious extensions even when they've been removed from the Chrome Web Store.

A lot of research in this space already exists on the Internet and I'll link to it accordingly.

Typical Scenarios For Malicious Extensions

  1. An author for a legitimate extension gets a buyout offer and sells the extension. The buyer is actually an attacker who wants to quickly infect a million users. The attacker creates a new update with malicious code, Chrome updates the extension automatically on end user systems, and now a million users who had a legitimate extension now get updated with malware. See Better History and The Great Suspender for examples. We have also seen this happen through account takeovers where the extension author is locked out of their account and an attacker adds a malicious update.

  2. User is tricked into installing a malicious extension through phishing or malvertisement. A Russian APT used this technique in 2015 and a North Korean APT did this in 2018. But there's plenty of eCrime campaigns using this for adware and monetization.

  3. User downloads an extension that appears to be useful but is trojanized with malware/adware. Some legitimate extensions have evil doppelgangers with malware added.

 

The Dangers

What is the potential for damage? More than you might expect.


How To Identify and Stop Malicious Extensions In Your Enterprise

Enterprise controls around browser extensions are severely lacking. Very few security tools even cover this space. Malwarebytes is one of the few that identifies malicious extensions on scans. What are our options?

Detection Via Domain Callbacks

Most malicious browser extensions need to reach out to the Internet to exfiltrate data, download the next stage of malicious code or to get the next advertisement. Detecting these domains in your DNS/proxy logs is a great way to detect malicious extensions in your environment.

Malicious browser extensions seem to come in campaigns and related malicious extensions tend to call out to the same domains/networks. It's the method I've been using to identify clusters/campaigns of extensions.  

Where do we get these malicious domains? Again, open source intelligence is your friend. Reports on newly found malicious extensions often include these domains. Threat intelligence firms don't seem to specifically cover this space either. The proxy service I use does identify a lot of these domains as malicious but doesn't directly relate them to extensions. Another option is once you begin finding and analyzing malicious extensions there's often related domains in the code (sometimes they're heavily obfuscated, sometimes not) and you can begin adding your own research.

If you have EDR and an enterprise proxy, try looking for systems with repeated blocks to domains where the DNS request is from Chrome. Sometimes it's just web browsing, but when a user is hitting the same blocked site regularly over time it's worth an investigation.
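As a sketch of that approach, here's a minimal Python aggregation over hypothetical proxy events. The field names (host, domain, action, process) are assumptions to map onto whatever your proxy/EDR actually exports:

```python
# Sketch: find hosts that repeatedly hit the same blocked domain from
# Chrome, which is a decent lead for a beaconing extension. The log
# fields below are hypothetical; adapt them to your proxy/EDR export.
from collections import Counter

def repeat_blocks(events, min_hits=5):
    """Count blocked (host, domain) pairs and keep only the noisy ones."""
    counts = Counter(
        (e["host"], e["domain"])
        for e in events
        if e.get("action") == "blocked" and e.get("process") == "chrome.exe"
    )
    return {pair: n for pair, n in counts.items() if n >= min_hits}

events = (
    [{"host": "PC-042", "domain": "bad-ads.example", "action": "blocked",
      "process": "chrome.exe"}] * 7
    + [{"host": "PC-007", "domain": "news.example", "action": "blocked",
        "process": "chrome.exe"}]  # one-off, probably just browsing
)
print(repeat_blocks(events))
```

The min_hits threshold is arbitrary; the point is that a single block is usually noise, while the same host hitting the same blocked domain over days is worth pulling the extension list for.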

Group Policy 

Group Policy can be used for whitelisting or blacklisting extensions.

Whitelisting: Blocking all extensions except for ones the company approves is the most secure way to handle this problem, but it will also cause the Help Desk a lot of grief as users start making complaints and requests to whitelist extensions. It's also possible to install copies of the browser that don't adhere to the group policy. Users love to get around security controls. 

Blacklisting: Blocking known bad extensions is a low impact way to stop these attacks but leaves you open to the next malicious extension. I haven't found any threat intelligence firms that cover browser extensions but there's a vast amount of open source research available.

Custom Tools

When I said enterprise controls around browser extensions were lacking, I wasn't kidding. I've had to build my own tools to List/Disable/Download extensions. I'll be releasing some of these on Github.

Removing Chrome extensions remotely is a tricky business. If you delete the extension folder from the host and the extension still exists in the Chrome Web Store (Google isn't always receptive to these reports) it can be re-downloaded.


Challenges

Identifying Extensions

Unless you're in the browser's UI, extensions are not easily identifiable. Chrome extensions use a 32 character code to identify the extension.

You probably don't have these memorized. But if the extension is in the Chrome Web Store, you can find out what it is by going to https://chrome.google.com/webstore/detail/<extension code>, or the title is sometimes in the manifest.json (mostly not, though). The first thing I did was build a script to identify extensions using these methods.

If the title is not in the Chrome Web Store or manifest.json file, and it's not one of the default extensions installed with Chrome, identifying it can be tricky. Try Google or a service like Crxcavator.io or Crx4Chrome.
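Here's a minimal Python sketch of those two lookups (the store URL plus the manifest.json title). The sample extension ID and paths are made up:

```python
# Sketch: given a 32-character extension ID and its install folder,
# build the Chrome Web Store lookup URL and pull the name from
# manifest.json when it's present. The demo ID/paths are hypothetical.
import json
import tempfile
from pathlib import Path

STORE_URL = "https://chrome.google.com/webstore/detail/"

def identify_extension(ext_id, ext_dir):
    """Return (store_url, manifest_name_or_None) for an extension."""
    name = None
    manifest = Path(ext_dir) / "manifest.json"
    if manifest.exists():
        data = json.loads(manifest.read_text(encoding="utf-8"))
        title = data.get("name", "")
        # Names like "__MSG_appName__" are locale keys, not real titles.
        if title and not title.startswith("__MSG_"):
            name = title
    return STORE_URL + ext_id, name

# Demo against a throwaway manifest (the 32-char ID below is fake).
demo_dir = tempfile.mkdtemp()
Path(demo_dir, "manifest.json").write_text(
    json.dumps({"name": "Sample Extension", "version": "1.0"}),
    encoding="utf-8",
)
url, name = identify_extension("a" * 32, demo_dir)
print(url)
print(name)
```

In practice the store URL either resolves to the extension's listing or 404s, which itself tells you something: an installed extension that's gone from the store deserves a closer look.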

Microsoft Edge extensions appear to be similarly identifiable by going to Microsoft's extensions site: https://microsoftedge.microsoft.com/addons/detail/<extension code>

For Firefox, identification isn't quite so easy. The extensions are zipped into .xpi files and I haven't been able to easily tie the extension code back to the Firefox Browser Add-Ons page.

 

Enterprise Wide Identification

It would be nice if there was a tool that let us see all installed browser extensions across the environment. In my experience, this doesn't currently exist. My current thought would be to run my List-ChromeExtensions script across all hosts in the enterprise and dump the output to a database/logging platform on a regular basis. It wouldn't give an up-to-the-minute account of all installed extensions, but it would be an improvement over what currently exists.

 

How Do We Determine If An Extension Is Malicious?

This is the $25 million question, and it's not a simple problem. Attackers insert their own functions into existing libraries (like jQuery) or insert additional malicious code into otherwise legitimate JavaScript. More recently they're downloading code from a C2 server, so the malicious code is never even on the file system.

Analyze The Code

Put your javascript hat on. Analyzing the code in Chrome extensions is not a simple task nor is it a complete one. But it's the best way to find unknown malicious extensions.

In 2018 Google announced it would no longer allow extensions with obfuscated code. But obfuscated code is definitely still getting through.

Quick things to look for that increase suspicion:

  • Sometimes the code includes the callback domains in plaintext.
  • Look for code obfuscation in automatically called scripts. If the code is hiding something, it's a red flag.
  • Look for unnecessary permissions in the manifest.json.
  • Look for 'unsafe-eval' in the content_security_policy in the manifest.json.

Code analysis may be the only way to accurately determine if an extension is malicious but it's as slow as reversing malware. It's also useful if you can diff previous versions of the code to find the code changes (Previous versions are sometimes available at Crx4Chrome).

Check Reviews

Often the reviews in the Chrome Web Store are indicative of bad behavior. If people are encountering pop ups or malware, they often post about it. But the bad guys are also posting fake reviews and Krebs wrote a recent article about it. This seems like a good avenue for future research. Find the fake reviews, find the malicious extensions.

Check the Extension On Crxcavator.io

Crxcavator scans extensions currently in the web store and reports risk levels. It's a useful tool for analysis but doesn't indicate whether an extension is actually malicious. Another tool in the toolbox.


Chrome Sync

Do you allow users in your environment to use Chrome sync? If so, they're copying all of their settings including malicious Chrome extensions and malicious Chrome notifications into your enterprise. Not to mention it's a data exfiltration path you probably haven't considered.

I haven't found a lot of research on Chrome Sync, but I've definitely found malicious extensions synced and re-downloaded by a user after I removed them. I've also seen Chrome Sync trying to download Chrome extensions from malicious web sites (this was possible before 2015).


In Closing

Browser extensions are the wild west of malware at the moment. There's little visibility, almost no enterprise security tools to help, and it's as effective as a supply chain attack at reaching large audiences. Expect more attacks.

I anticipate the need to write more on this topic. I think it's worth sharing some of the tools I've developed to help identify extensions.



Other Interesting Reports

Awake Security: Malicious Domain Registrars

Duo Report on Malvertising Campaign



Tuesday, February 6, 2018

Vendors, Give Me My Data!

Although we're not in the Teutoburg Forest, as an Incident Responder, I live in a vortex of swirling data. My cases live and die based on the data we've collected. There's nothing more frustrating than asking a question, going to the logs, and finding out that the data I need to answer my question doesn't exist. It happens more than I like to tell management. And it's not that the data wasn't collected, it's that the vendor doesn't supply it so I can't look at it in my SIEM. Let me list some offenders I've encountered over the years and why the problem is so galling.

We had a spam service that re-writes URLs in e-mails in order to record clicks, so that when it discovers a phishing campaign after the fact, it can send us warnings about who clicked the phishing link. The problem? When the service misses a campaign and we discover it manually, we have no way to access that click data. When we suspect credential theft has occurred, we need to know who clicked the link immediately. Our only recourse is going through support, which takes hours to days. Meanwhile, the attacker is happily logging into our systems. Not okay. If the data were in our SIEM, the incident would be contained within the hour.

While demoing antivirus products we spoke to a few vendors who literally asked "Why do you need the logs? We protect you." Why? Because an analyst needs to look at them. Trust but verify. When you catch stage 2 of a malware infection in the Windows directory, it means you missed stage 1 (this happens all the time). This is obvious if you're looking at the logs and we need to catch it when you miss it. Also, can you please give us the hash of the detected file? That way our analysts can look it up in our intel platform and get an understanding of what we're dealing with. Is it PUP or a RAT? Is it commodity malware, something unique, or is it a false positive for a legitimate homebrew app? Bottom line, if you can't send logs to a SIEM automatically and your response is "you can manually download a .csv from the console" you're just not ready for the enterprise space.

I love proxy logs. They're a treasure trove of data, and I've done some very successful hunting for C2 traffic with them. Please, for the love of malware, include as much data from HTTP request headers as possible in your logs. Malware writers make mistakes when they craft these requests all the time, but I can't detect them if you don't give me the data. A vendor I encountered recently was trying to be helpful by analyzing the user agent strings in web traffic and returning the application the request came from. While I like that, what I didn't like was the vendor not including the raw user agent string in the log data. Why is that a problem? We could no longer detect malware using known bad user agent strings. For example, njRAT puts C2 data in the user agent string, so it's obvious if you're looking for it. Even if the proxy is doing its job and blocking this traffic, you still need to know about the infected computer so you can remediate it.
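As a sketch of that kind of hunting, here's what a raw user agent check might look like in Python. The indicator substrings and log fields are illustrative assumptions, not a real intel feed:

```python
# Sketch: flag proxy log entries whose raw user agent string matches
# known-bad substrings. The indicator list and log format below are
# examples only; feed in real indicators from your own research.
BAD_UA_SUBSTRINGS = ["njrat", "nj-q8"]  # illustrative indicators

def suspicious_user_agents(proxy_logs):
    """Return entries whose user agent contains a known-bad substring."""
    return [
        entry for entry in proxy_logs
        if any(bad in entry.get("user_agent", "").lower()
               for bad in BAD_UA_SUBSTRINGS)
    ]

logs = [
    {"host": "PC-042", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
    {"host": "PC-077", "user_agent": "njRAT|4.0|victim-data"},  # fake sample
]
for hit in suspicious_user_agents(logs):
    print(hit["host"])
```

None of this works if the vendor strips the raw user agent from the logs, which is exactly the point of the rant above.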

/rant over

Salesmen of security tools exude confidence. Their tool is going to save the world. Set it and forget it! If we buy their tool, we're safe. Period. That's what they tell us. The reality is that all security controls fail. No tool is perfect. And that's okay! Any security analyst worth their salt knows we're in a game of cat and mouse and eventually an attacker will gain the upper hand. VENDORS, even if your tool fails you can still help us! Give us the raw data! We can manually piece together what happened, what was missed, and send you back the results so you can improve your product. We win, you win. This is the kind of relationship you want between a customer and a vendor, a partnership, one that leads to better defenses for everybody. All we need is the data.

Saturday, September 2, 2017

What A USB .lnk Worm (Jenxcus) Looks Like To The End User

USB worms are one of those infections that should be long dead except that users keep them alive. All it takes is an unprotected computer at home and a user who decides to use their USB stick on their home computer as well as at work.

I've come across a lot of USB worms involving shortcut files in my work experience and I've found that many SOC analysts have a hard time understanding what's going on because they've never actually seen what's happening from the user's point of view. I've scoured the web for examples that demonstrate this clearly but haven't found anything satisfactory as a teaching aide.

In this example I'll be showing a Jenxcus .vbs infection that has taken over a USB stick.

The user sits down at their computer, plugs in their infected USB stick and wants to open a file called Introducing-NMAP.pdf. The trained user might notice that all of their files are shortcuts for some reason, but most do not. All of these shortcuts (.lnk) are malicious and will infect the machine with Jenxcus.


So the user clicks Introducing-Nmap thinking they're opening their PDF file. What are they actually opening?
The command line run is actually:
C:\Windows\system32\cmd.exe /c start aibdrozrug..vbs&start Introducing-Nmap.pdf&exit

This runs the malicious VB script aibdrozrug..vbs and then opens Introducing-Nmap.pdf so the user is completely unaware that they've just infected their computer. To them it looks like their PDF opened normally, albeit after a small delay.

But where are the real files? And where is the malicious aibdrozrug..vbs file? They're hidden files. Why don't we see them? Because by default, Windows hides extensions for known file types, as well as hidden and system files/folders.

When we uncheck these boxes, we can clearly see the malware on the USB stick. Notice how it creates shortcuts for every file and folder. All of them open the malicious .vbs script and then open their file/folder as expected.

Therefore if you're an analyst and you can see the command line data, look for &, start, exit, and explorer. The existence of these doesn't necessarily mean it's malware, but if you also see a suspicious looking script it's probably evil. Also, the parent process of the malicious cmd.exe will be explorer.exe because the user opened this shortcut from Windows Explorer. A persistence mechanism will usually be created and the malicious .vbs will be copied to the C:\ so the machine stays infected.

Here's some example shortcuts from this infection.
Opening a File:
C:\Windows\system32\cmd.exe /c start aibdrozrug..vbs&start Introducing-Nmap.pdf&exit
C:\Windows\system32\cmd.exe /c start aibdrozrug..vbs&start androidscareware.txt&exit
C:\Windows\system32\cmd.exe /c start aibdrozrug..vbs&start 20160112_205327.jpg&exit
Opening a Folder:
C:\Windows\system32\cmd.exe /c start aibdrozrug..vbs&start explorer Jenxcus&exit
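The triage advice above can be sketched as a quick heuristic over command line data. This is a rough Python illustration, not a production detection; the scoring logic is my own:

```python
# Sketch: score a shortcut's command line for the Jenxcus-style pattern
# described above: cmd.exe chaining 'start' of a script with '&' and
# 'exit'. A hit is a lead for an analyst, not proof of malware.
SCRIPT_EXTS = (".vbs", ".js", ".wsf", ".bat", ".ps1")

def looks_like_lnk_worm(cmdline):
    """Heuristic: cmd.exe chains 'start' commands and launches a script."""
    low = cmdline.lower()
    chained = "cmd.exe" in low and "&" in low and "start" in low
    runs_script = any(ext in low for ext in SCRIPT_EXTS)
    return chained and runs_script

samples = [
    r"C:\Windows\system32\cmd.exe /c start aibdrozrug..vbs"
    r"&start Introducing-Nmap.pdf&exit",
    r"C:\Windows\system32\cmd.exe /c start aibdrozrug..vbs"
    r"&start explorer Jenxcus&exit",
    r"C:\Windows\system32\notepad.exe notes.txt",
]
print([looks_like_lnk_worm(s) for s in samples])
```

Remember the parent-process check too: a hit whose parent is explorer.exe fits the "user double-clicked a shortcut" story described above.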

Here's what the Process Tree looks like for the Introducing-Nmap shortcut:

There's also malware that uses this same shortcut technique but kicks off rundll32.exe with a malicious .dll file on the USB stick. I've seen cases where the .dll filename and extension are both gibberish so it's not obvious that it's a .dll file. (Andromeda malware)

There's no vulnerability involved in this method other than tricking the user. This infection works because the Windows default setting hides known file extensions and hidden files/folders. The user has no reason to think anything is wrong.

I'm not going to dig into the .vbs file itself as it is beyond the scope of what I wanted to show here. But VirusTotal coverage is only (38/57) at the time of writing and this malware is from 2013!