Quick Tip: Updating your ESXi Server

[Last Reviewed: 2021-05-02]

Are you attempting to update your ESXi server and you’re getting the infamous [Errno 28] No space left on device error? Did you already enable the swap and you’re still getting that error? You might need to download a VMware Tools VIB before starting your upgrade:

Step 1: Make a note of the file that is displayed in the error. In this case, the file is VMware_locker_tools-light_11.2.5.17337674-17700514

Step 2: Download and install that file using the following commands:

cd /tmp
wget http://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/<FILE_NAME_IN_ERROR_MESSAGE>.vib
esxcli software vib install -f -v /tmp/<DOWNLOADED_FILE>
rm /tmp/<DOWNLOADED_FILE>


Step 3: Re-run your update! If you’re not sure what’s available, check out this handy Patch History site.

Good luck, and happy updating!

Configuration Manager Security and You

[Last Reviewed: 2019-06-04]

At MMSMOA 2019, I enjoyed a great presentation by Tom Degreef and Kim Oppalfens about Configuration Manager security. It was an amazing session, and I want to share some of the learnings I took away from it.

But first, a heart-to-heart

Extend the olive branch to your network and security teams.

I’ve noticed at conferences and elsewhere that you’ll hear things like “who needs network admins or sysops, we just need sysadmins.” Perhaps this started as a lighthearted joke, but it’s grown into a battle cry for some. This is unfortunate. At the end of the day, we’re all working under the same organization with the same business goals (and we all have a common enemy–those who wish to use our network for evil).

I get it, it’s really easy to think that other teams in your organization are there just to make your life miserable. The fact is these teams (including yours!) are doing what they’re required to do for the business, and they’re doing their best with the information they’ve got. Not every team speaks in application deployments or CMTrace-friendly logs. Other teams see ConfigMgr as bandwidth, as NTLM handshakes, as port 10123 traffic, or as disk storage.

This is where you come in.

Add value to their perspective. Get them involved. Show them that there’s so much more to the ConfigMgr space. It will be a learning experience for everyone, and you might just start to see things from their perspective as well.

A slightly different outlook

ConfigMgr is a Tier 0 application sitting in your environment, holding all sorts of data and all sorts of control over your domain. Look at the power this product wields from a single console:

  • Deploy applications or packages to any machine, including domain controllers
  • Deploy policies (configuration baselines) to any machine, including domain controllers
  • Run Scripts against any machine, including domain controllers

You’re probably getting the idea here, but it’s not just ConfigMgr pushing changes out. It’s also ingesting data. Does your database contain LAPS passwords? Patch compliance? Members of the local administrators group? This is useful information for you, but it’s a veritable gold mine for a security researcher who’s poking around for information on your network.

This is where the perspective change comes in. You use ConfigMgr because it’s an incredibly powerful tool for good. A researcher may use ConfigMgr because it’s an incredibly powerful tool for…something that probably doesn’t align well with the business.

Let’s take a look at a few different ways we can harden our environment.

NOTE: Some of these steps may look a bit “tin-foil-hat”, but it’s all about determining your business’ level of acceptable risk.

Content Source

Scenario

If you weren’t familiar with how ConfigMgr worked and spent time looking from the outside in, you’d probably notice the folder containing all your application, package, and script sources. If these files aren’t properly secured, a researcher could modify an installer (or modify a script that wraps around an installer–would you notice an extra line or two in a PSADT script?). These applications bypass UAC, and could even bypass Windows Defender Application Control.

If your source directory is read-only, there still could be information of value. Do your scripts have credentials or other secrets in them? Are there license files?

Mitigation Steps

To help keep your source content safe:
✅ Determine who is an admin of the server (and make sure they’re not using weak credentials). Content Managers do not need admin rights to the server.
✅ Consider setting NTFS permissions on the content source directory to only allow creation of new folders and content (and assign these to only those who need it).
✅ Consider setting separate NTFS permissions to allow updating content, but only grant these permissions when required.
✅ Consider using a PAW (Privileged Access Workstation), or another well-protected device, when updating content.
✅ Consider validating the content of your source files with a directory hash before updating.
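
That last step can be as simple as baselining SHA256 hashes for the directory and comparing them before each content update. Here’s a minimal PowerShell sketch (the share path and file names are hypothetical; adjust to your content source):

```powershell
# Baseline and re-check a content source directory
$source = '\\CM01\Source$\Apps\7-Zip'   # hypothetical path

function Get-DirectoryHash ($Path) {
    Get-ChildItem -Path $Path -Recurse -File |
        Get-FileHash -Algorithm SHA256 |
        Sort-Object -Property Path
}

# Take a baseline once, after you've verified the content is good
Get-DirectoryHash $source | Export-Clixml "$env:TEMP\7zip-baseline.xml"

# Later, before updating the deployment, compare against the baseline
$baseline = Import-Clixml "$env:TEMP\7zip-baseline.xml"
$current  = Get-DirectoryHash $source
Compare-Object $baseline $current -Property Hash, Path
```

Any rows returned by Compare-Object mean a file was changed, added, or removed since you took the baseline.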

ConfigMgr Server

Scenario

If you weren’t familiar with ConfigMgr, but wanted to gain access to it, you’d probably look at the server itself and see how it was configured. Who’s an administrator of this machine? Are they using weak credentials? Is Windows Firewall on and configured properly?

Mitigation Steps

To help keep your server safe:
✅ Determine who is an admin of the server (and make sure they’re not using weak credentials).
✅ Patch the server regularly!
✅ Validate that the server’s physical access is protected (and if it’s a virtual machine, validate that the host’s physical access is protected).
✅ Require MFA for login sessions (for interactive login or via RDP)
✅ Require MFA for the SMS Provider
✅ Limit the number of Full Administrators in ConfigMgr (to zero, if you can–use an empty group)
✅ Avoid adding individual users to RBA (Role-Based Access) who are already covered by a group membership

SQL Server

Scenario

If you are a local administrator of a SQL Server, you can become a SQL administrator with not much effort.
If you are a SQL administrator, you can become a ConfigMgr admin with, again, not much effort.

Mitigation Steps

To help keep your SQL server safe:
✅ Only open SQL firewall ports for roles that require direct SQL access
✅ Consider using a Group Managed Service Account
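
If you go the Group Managed Service Account route, the setup is only a couple of Active Directory cmdlets. The names below are hypothetical, and your domain needs a KDS root key before gMSAs can be created:

```powershell
# Create a gMSA for the SQL Server service
# (requires the ActiveDirectory module and an existing KDS root key)
New-ADServiceAccount -Name 'gmsaCMSQL' `
    -DNSHostName 'gmsaCMSQL.contoso.com' `
    -PrincipalsAllowedToRetrieveManagedPassword 'CM SQL Servers'

# Then, on the SQL server itself:
Install-ADServiceAccount -Identity 'gmsaCMSQL'
Test-ADServiceAccount -Identity 'gmsaCMSQL'
```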

Status Filter Rules

Scenario

Status Filter Rules are great. You can trigger actions based on a status message ID. You can find many great scripts online, but be mindful where you download these. Status filter rules run with the highest privileges.

Mitigation Steps

To help keep your environment safe from rogue status filter rules:
✅ Sign your PowerShell scripts
✅ Validate NTFS permissions where your scripts are stored
✅ Consider using filters that are as strict as possible

Client Installation

Scenario

When a client is not yet managed by ConfigMgr, there’s the potential to trick it into talking to a different Management Point (or, even worse, execute a pass-the-hash attack to capture information about the client push account). Client push has a large attack surface: it requires a lot of dependencies to work, and it can be easily exploited if Kerberos isn’t enforced.

Consider using a PKI infrastructure. This is not very popular, but has security benefits. Otherwise, consider Enhanced HTTP–you may find that it takes significantly less effort to enable this.

Mitigation Steps

To help keep your client installation safe:
✅ Don’t use client push
✅ If you must use client push, enforce Kerberos mutual authentication. Introduced in 1806, this prevents client push from using NTLM

Network Access Account

Scenario

The Network Access Account (NAA) is evil. This account is used by clients when they can’t use their local computer account to access content on distribution points (think OSD). Its credentials are encrypted in a policy that is deployed to all machines, but the local ConfigMgr agent is able to decrypt this password.

Mitigation Steps

To help protect your NAA:
✅ Don’t use one! By enabling HTTPS/Enhanced HTTP and Token-based authentication, you can eliminate the need for the NAA
✅ If you must use a NAA, do not over-privilege the account.

Trusted Root Key

Scenario

ConfigMgr site servers exchange keys to communicate with each other. The top-level site in the hierarchy holds the trusted root key–its function resembles that of a root certificate in a PKI.
Clients automatically retrieve this key during client push, or from Active Directory (when AD has been extended with the ConfigMgr schema). If a client cannot retrieve the trusted root key using one of these mechanisms, it could be misdirected to a rogue management point.

Mitigation Steps

To help protect your unassigned clients:
✅ Validate that ConfigMgr has published its site to any AD domain it manages
✅ If this cannot be done, manually deploy the key.

OSD/PXE

Scenario

OSD/PXE can be an interesting target. It’s during this time that machines are onboarded and built from scratch. If a researcher is able to open a command prompt during OSD, all sorts of information could be extracted–and any commands would be run with elevated permissions.

Mitigation Steps

To help protect your OSD environment:
✅ Use a client authentication certificate when creating boot media
✅ Disable command line support in boot images
✅ Be mindful of Runas accounts in task sequences, and validate that they are not over-privileged.
✅ Use HTTPS or Enhanced HTTP!

Local Machine Data (Data Leakage)

Scenario

ConfigMgr admins love logs–we are spoiled with lots of them. These logs, as well as other local ConfigMgr content, could be interesting to a security researcher.

Mitigation Steps

To help protect your local machine data:
✅ Be mindful of the logs that are left behind after OSD
✅ Be mindful of other ConfigMgr logs that may include interesting information (like execmgr.log)
✅ Are you storing sensitive information in WMI?
✅ CCMCache files are readable by regular users!
✅ Search through your logs or cache–do you see keywords like “-UserName“, “-Password“, or “-Key“?
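
That last check is easy to script. Here’s a quick PowerShell sketch that sweeps the client logs and CCM cache for suspicious keywords (paths assume a default client install; extend the keyword list to taste):

```powershell
# Sweep ConfigMgr client logs and the CCM cache for secrets left behind
$paths    = "$env:windir\CCM\Logs", "$env:windir\ccmcache"
$keywords = '-UserName', '-Password', '-Key'

Get-ChildItem -Path $paths -Recurse -File -ErrorAction SilentlyContinue |
    Select-String -Pattern $keywords -SimpleMatch |
    Select-Object Path, LineNumber, Line
```

Anything this returns deserves a closer look (and probably a cleanup step in your task sequence or script).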

Application Management

Scenario

Most application deployments are running with the highest privileges, which could make them a target. This risk is significantly higher if an application is interactive.

The Application Catalog roles are an IIS attack surface–they are no longer required if you are using the new Software Center.

Mitigation Steps

To help protect your deployments:
✅ Use the new Software Center and remove the application catalog roles
✅ Consider preventing user interaction with elevated processes
✅ Always download and execute applications–this ensures nothing is tampered with in transit

Software Updates

Scenario

Much like the content source directory, the software updates directory could be tampered with–meaning incorrect content could be deployed to your machines under the guise of keeping them updated. Keep these locations secure!

Mitigation Steps

To help protect your software updates:
✅ Validate ACLs of the Software Updates download location
✅ Validate ACLs of the Software Updates package folders
✅ Protect these paths with SMB signing or IPSEC
✅ Consider using a dedicated WSUS website

Configuration Items

Scenario

CIs are powerful, but a carefully-crafted CI could be used to execute code in your environment. You can find great CIs online, but be careful where you download them.

Mitigation Steps

✅ Be mindful of importing CIs from unknown sources

This is only the beginning

Your ConfigMgr environment is powerful. The product is outstanding and gets better with every release. More and more organizations are using ConfigMgr to manage their environments. As it becomes more popular and more powerful, it will continue to become increasingly interesting (and eventually become a common target).

As you continue to manage your environment, be mindful of the changes you implement. Stop and think for a moment–think like a security researcher–could the action you’re about to perform undermine existing controls in your environment (such as not signing a PowerShell script)? Is there an existing site configuration that could be used to exploit your environment (such as using client push)?

Remember: Security is everyone’s responsibility. Some day, your environment will thank you for it.

Quick Tip: Adding drivers using PnPutil

[Last Reviewed: 2019-04-24]

If you’re like me, one of the first things you do when you get a new device is wipe and reload it. OEMs have gotten better about the amount of value-added software they preload on machines these days, but I still prefer starting fresh and building on top of that.

If you happen to purchase a device that also has a driver package available, we can use this to our advantage! Here’s how:

Step 1: Download the driver package

Dell

Download your driver CAB from Driver Packs for Enterprise Client OS Deployment

Extract your CAB using expand.exe driverpack.cab -F:* .\destination-path

HP

Download and extract your driver package from Driver Packs – HP Client Management Solutions

Step 2: Install the drivers

PnPutil is a built-in tool for adding drivers to the currently-running OS. Previously one could use pnputil.exe in conjunction with forfiles.exe to install drivers recursively, but now this functionality is included via the /subdirs switch! Run the following at the root level of your extracted driver package:

pnputil.exe /add-driver *.inf /subdirs /install

PnPutil will find and install all the drivers it can to the OS that’s currently running. Reboot when it’s done and you’ll be ready to proceed with awesomeness!

Code Signing: Proving Your Enterprise Code Is Yours

[Last Reviewed: 2019-04-25]

PowerShell scripts, ClickOnce VSTO applications, .NET applications, even Java Deployment Rulesets. What do they all have in common?

You can sign them with a code signing certificate!

What’s Code Signing?

When you digitally sign an executable or script, you’re guaranteeing that the code hasn’t been altered or corrupted since it was signed. The same signing certificate can also be used to prove your identity as a trusted publisher, meaning your end users can run your code confidently knowing it’s genuine. (No pressure!)

If you’re looking to sign code that will be distributed to a worldwide audience (or at least one that isn’t a part of your network/enterprise), it’s best to purchase a code signing certificate from a certificate authority.

You can also use self-signed certificates, but this method doesn’t scale very well outside of a small development environment.

If you’re only going to distribute your code internally to your enterprise, you can use something that’s probably already running in your network: Active Directory Certificate Services!

Code signing? With my Enterprise PKI? It’s more likely than you think.

Let’s step through the process of getting a Code Signing certificate template available for use, then request a certificate!

Step 0: Preparing your environment

There is one prerequisite step (besides having an enterprise PKI set up!), and that’s creating an AD security group. It may be your corporate policy that only specific individuals can sign code or scripts, so creating this group lets us scope who can do this. If you don’t have such a policy, it’s still a good idea to create a security group for this. For this example, I’m creating a group called Code Signers.

I’m adding myself to this group. Not only will this let me test that everything’s working, it will also let me sign my code!

Step 1: Make the Code Signing certificate template available

Open a Certification Authority snap-in connected to your issuing certificate server. Right-click on Certificate Templates and select Manage. The Certificate Templates Console will appear.

Right-click on the Code Signing certificate template. Click the Security tab and add the Code Signers security group we created in Step 0, grant Enroll permissions, then click OK.

Note: If you want to change any of the values in this certificate, you’ll need to close the properties window, right-click on the certificate template, and select Duplicate Certificate. Here you can customize things like the validity period (which is 1 year by default). (You’ll still want to add your security group to this new template with Enroll permissions!)

Once your template is ready, close the Certificate Templates Console. Back in the Certification Authority snap-in, right-click on Certificate Templates > New > Certificate Template to Issue.

In the Enable Certificate Templates window that appears, select your certificate template and click OK. If you duplicated the certificate, look for it under the name you selected. The intended purpose should say Code Signing regardless.

Note: If your certificate template doesn’t appear in the list right away, you may need to wait a bit. The template must replicate across your domain. Perhaps use this time to learn about AD change notification. 🙂

Here I called my certificate “Enterprise Code Signing” so I could modify the validity period.
Your new Code Signing certificate is now ready! Let’s request one!

Step 2: Request a Code Signing certificate

On my local machine, signed in as myself (or a member of our Code Signing security group), I open mmc.exe and add the Certificates snap-in (File > Add/Remove Snap-in > Certificates > Add > OK).

Note: if you are prompted to pick from a Computer account, Service account, or a User account, select User account.

In the snap-in, right-click on Personal and select All Tasks > Request New Certificate…

The Certificate Enrollment dialog appears. Click Next, then select Active Directory Enrollment Policy and click Next. Locate your code signing certificate in the listing. From here you can either click Enroll to request the certificate, or click Details > Properties to modify settings, such as making the key exportable (so you can use the same key on multiple machines). This can be changed under Private Key > Key options > Make private key exportable.
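
If you’d rather skip the GUI (or want to script enrollment), the PKI module’s Get-Certificate cmdlet can request from the same template. A quick sketch ('EnterpriseCodeSigning' is a placeholder for your template’s short name):

```powershell
# Request a code signing certificate from the AD enrollment policy
# and place it in the current user's Personal store
Get-Certificate -Template 'EnterpriseCodeSigning' `
    -CertStoreLocation Cert:\CurrentUser\My
```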

Huzzah! After completing the certificate enrollment, a new certificate will appear under Personal > Certificates. It will be issued to your name, and it will show Code Signing under Intended Purposes.

With your new code signing certificate, you could do something awesome like sign that great PowerShell script you just finished writing!

Code Signing PowerShell in the Enterprise

[Last Reviewed: 2019-04-25]

Here’s a scenario: you just finished writing the world’s most amazing PowerShell script, and you want to deploy it to a collection of workstations in your enterprise. You run it, just to see the wall of red text telling you that unsigned scripts are not allowed.

What if we signed that script?

Not only would that let your code run on machines without changing the execution policy, it would also serve as another layer of authentication–proof that the script that’s about to run is created by you, unaltered, and not corrupt.

The good news is it’s not a lot of effort to sign a script. If you’ve got a code signing certificate from your enterprise PKI (or from a public CA), you’re just a couple PowerShell commands away from gaining these benefits!

Using PowerShell to sign PowerShell

Step 0: Preparing your environment

Before we sign your code, we’re going to need some code to sign! If you don’t have a script handy, feel free to use this one-liner. Save it to Get-Truth.ps1.

Write-Output "$env:USERNAME is amazing!"

Step 1: Signing the Script

To sign your script, we’re going to use Set-AuthenticodeSignature. To run this cmdlet, we’re going to need three things:

  • The path to your script
  • Your code signing certificate
  • A timestamp server*


*Actually, we don’t need the timestamp server. You can sign your code just fine without it, and things will work great–at first. Once your code signing certificate expires, one of two things will happen:

  • If you included a timestamp server: your script will continue to work. The timestamp server tells the system that the code signing certificate was good at the time of signing, so your script will run fine even after the certificate expires.
  • If you didn’t include a timestamp server: your script will no longer work (users will see an error that the script’s digital signature is not valid). A timestamp server wasn’t able to confirm that the certificate was good at the time of signing, so the system assumes the worst (which it should).

There are many timestamp servers out there–use one that meets your requirements or strikes your fancy.

To get your code signing certificate, you can use a command like Get-ChildItem Cert:\CurrentUser\My

This command will list all certificates in your Personal store.

Locate your certificate in the list and run $cert = (Get-ChildItem -Path Cert:\CurrentUser\My\YOURCERTIFICATETHUMBPRINT)

Now we’re ready to sign your script! Run Set-AuthenticodeSignature .\Get-Truth.ps1 -Certificate $cert -TimestampServer http://tsa.starfieldtech.com

If the Status returns Valid, the signing was a success! You can open your script now and see a signature block at the end of your script.

Step 2: Trust your Signed Script

If you were to run this script now, you may be prompted: Do you want to run software from this untrusted publisher?

If you select Always run, your code signing certificate will be copied to the Trusted Publishers store. If you select Never run, your code signing certificate will be copied to the Untrusted Certificates store. You can see this in action by running Get-ChildItem Cert:\CurrentUser\TrustedPublisher or Get-ChildItem Cert:\CurrentUser\Disallowed.

This will work great on your machine, but other machines in your environment will still prompt about your script being untrusted. If you’d like other machines to treat you as a trusted publisher, export your certificate (no private key required). You could use Group Policy to install this certificate to the Trusted Publishers store on other machines.
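
In PowerShell, the export (and the per-machine import, if you’re not using Group Policy) looks something like this sketch. It assumes the code signing certificate we just enrolled is the only one in your Personal store:

```powershell
# Export the public half of your code signing certificate (no private key)
$cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
Export-Certificate -Cert $cert -FilePath "$env:TEMP\CodeSigning.cer"

# On a target machine (run elevated), trust that publisher:
Import-Certificate -FilePath "$env:TEMP\CodeSigning.cer" `
    -CertStoreLocation Cert:\LocalMachine\TrustedPublisher
```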

Not only do you have an amazing script, now it’s signed (so its integrity is validated) and it has your personal guarantee that it’s trusted (by any machines that have your code signing cert in their Trusted Publishers store)!

Quick Tip: Retrieve an Embedded OEM Windows Product Key

If you’re working on a device that has an embedded OEM Windows product key, you can run the following to retrieve it:

wmic path SoftwareLicensingService get OA3xOriginalProductKey

“That’s nice,” you say, “but can we do that in PowerShell?”

But of course!

(Get-CimInstance -ClassName "SoftwareLicensingService").OA3xOriginalProductKey

Workaround: PXE Boot your VMware Fusion VM

[Last Reviewed: 2019-02-20]

If you attempt to PXE boot a virtual machine using VMware Fusion, you might see this error:

Error Code: 0xc0000001


Some quick research showed that I’d have better luck using the ‘vmxnet3’ network adapter rather than the default ‘e1000e’ adapter. I had no problem adding the vmxnet3 driver to our WinPE image, but… it was Friday. Read-only Friday. How could I image this machine before the weekend?

Here’s a workaround you can try. It assumes that your WinPE image already has Intel Net drivers. We’ll add a second NIC. The default NIC, using e1000e, will be used during WinPE; the secondary NIC, using vmxnet3 will be used for PXE. Once your VM has been created and VMware Tools has been installed, you can remove the second NIC.

  1. Create your Virtual Machine as usual, then open your VM settings
  2. Add another network adapter (be sure they are both set to “Bridged Networking”)
  3. Close VMware Fusion, and navigate to where the VM is stored on disk (the default is ~\Virtual Machines). Right-click on your Virtual Machine and select “Show Package Contents”.
  4. Find the .vmx file in that folder. Right-click and open it with your text editor of choice.
  5. Look for the line ethernet1.virtualDev = “e1000e” and change e1000e to vmxnet3
  6. Save your changes, close your text editor, then re-open VMware Fusion
  7. If your OSD process requires a TPM, now you can turn on Encryption and then add the TPM (don’t do this before you edit the .vmx file, otherwise it will be encrypted!)
  8. Under Startup Disk, select “Network Adapter 2” as your default
  9. Power on your VM and OSD away!
  10. Once OSD completes, install VMware Tools and then you can remove your 2nd network adapter.

Enjoy your new virtual machine!

Improving TeamViewer Aliases with ConfigMgr

[Last Reviewed: 2018-11-19]

Recently one of our technicians mentioned in passing how nice it would be if TeamViewer’s console showed usernames in addition to computer names, so searching would be easier.

That’s a fantastic idea–let’s use the TeamViewer API and ConfigMgr’s User Affinity to do this!

Prerequisites: Tokens, Please

Before we begin, we’ll need a script token from TeamViewer. Once you’ve generated this token, don’t share it with anyone! It’s essentially your password to make changes to your TeamViewer account, so keep it close.

(Also: if possible, try to use just one token for each application you create. In this case, use this token only for this script. It’s easy enough to generate new tokens, and it lets you be a bit more granular with permissions.)

To create a token, log in to your TeamViewer web console, then click on your name at the top right and select Edit Profile. Click Apps. Now click Create script token.

Give it a nice name (like Update-TeamViewerAlias) and a description. The access level should be set to User. The only drop-down we need to change is Computers & Contacts — change this to View, add, edit and delete entries.

Click Save. Make a note of that token!

The ConfigMgr Magic

Now create a Configuration Item in ConfigMgr. The full Discovery and Remediation scripts are available on GitHub. Paste your token in line 2 of each of these scripts (replacing YOUR-TEAM-VIEWER-API-KEY)

The Discovery script performs the following steps:

  • Uses TeamViewer’s ping API to make sure the service is available
  • Checks to see if the computer the script is running on has the username in its TeamViewer alias

For our compliance condition, we want the value of the script to return “true” (and run the remediation script if needed)

The Remediation script performs the following steps:

  • Uses TeamViewer’s ping API to make sure the service is available
  • Downloads the list of all TeamViewer computers, then uses the Remote Control ID (which is local) to learn the Device ID (which is not local)
  • Grabs ConfigMgr’s user device affinity information for the machine
  • Uses TeamViewer’s device API to update the alias (and update the description with the current date).
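
If you just want to see the moving parts, the ping check at the top of both scripts boils down to a single Invoke-RestMethod call (the token is a placeholder; endpoint per the TeamViewer API v1 documentation):

```powershell
# Verify the TeamViewer API is reachable and the script token is valid
$token = 'YOUR-TEAM-VIEWER-API-KEY'
$ping  = Invoke-RestMethod -Uri 'https://webapi.teamviewer.com/api/v1/ping' `
    -Headers @{ Authorization = "Bearer $token" }
if (-not $ping.token_valid) { throw 'TeamViewer token is not valid!' }
```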

The Finale

Deploy this baseline to your TeamViewer machines (I use a schedule of once per week, but you can season to taste).

Before, with just the computer name

And after–now you can search for computers with a username!
Want to add more lines in the Description field? Just use `n to separate lines:

description = "Automated for Awesome with PowerShell`nLast Updated: $(Get-Date -UFormat %Y-%m-%d)"

When adding details to the Description field, just take note that you can’t search against that field.

Now all your technicians need to assist your customers is a username–a win for everyone!

The Windows 7 MUI Black Screen of Death Spectacular, or “Why Can’t I Change Languages with DISM?”

[Last Reviewed: 2018-11-17]

Here’s a scenario: You’re a global organization looking to upgrade your remaining fleet of Windows 7 machines. Your fleet contains multiple locales–your Frankfurt location runs de-DE, and your Shanghai location runs zh-CN.

If you were to run an IPU on these machines using your en-US image, you’d discover pretty quickly that the IPU doesn’t complete. It’ll fail because the upgrade image’s UI language needs to match the host machine’s UI language.

There’s a great article written by Wilhelm Kocher that addresses this. In his solution, the en-US language pack is installed on the host machine, the default UI language is changed to en-US, then the IPU is completed. Afterwards, the host’s UI language is changed back to the original. It’s a clever solution that keeps your task sequences consistent, and saves space on your distribution points.

In testing this solution, I noticed a curious behavior. If I ran the IPU against a fresh Windows 7 SP1 install, the IPU was successful. If I ran the IPU against a fully-patched Windows 7 SP1 install, the machine would enter a “black screen of death” after the host machine’s UI language was changed to en-US. The only way to save the machine in this state was to spam F8 at boot and load a previously known-good configuration.

I opened a case with Microsoft on this, and a root cause was found! When the machine boots with the new UI language, the LSASS process fails prematurely because it can’t load the en-US strings from lsasrv.dll.mui for some well-known SIDs.

To resolve this issue, a new lsasrv.dll.mui for en-US was created that contains the missing strings.

The Resolution

This solution is provided “AS IS” with no warranties, and confers no rights. This solution is not officially supported.
This solution should be thoroughly tested before rolling out to your environment.

Preface: Windows 7 left mainstream support in 2015, so I’m eternally grateful that support worked with me to provide this solution. That being said, the modified lsasrv.dll.mui will likely not be provided in an official patch. I’ve uploaded the file I received from Microsoft to my GitHub repository, and you can download it from here: https://github.com/altrhombus/lsasrv-mui

To resolve this issue, all we have to do is replace the lsasrv.dll.mui that is placed in C:\Windows\system32\en-US after the language pack is installed. This should be done after the computer has been rebooted into Windows PE, but before the UI Language is changed.

In this task sequence, lsasrv.dll.mui was downloaded to C:\lsasrv prior to the reboot (to allow internet-based users to run the task sequence). During the “Update lsasrv.dll.mui” step, we run the following:

cmd.exe /c copy /Y %_OSDDetectedWinDrive%\lsasrv\lsasrv.dll.mui %_OSDDetectedWinDir%\system32\en-us

And that’s it! Now you can IPU your Windows 7 installation base regardless of the UI language–all while saving space on your distribution points.