Password Complexity Page using Azure B2C and Power Pages

I'm currently working on a project, and my UX team asked if it was possible to change the look of the B2C sign-up / password change page to include visual hints showing whether the password complexity requirements have been met. We've all seen it before: you visit a site where you need to sign up, the password needs to be X characters long and contain this and that, and some sites include a cute visual to help identify which requirements have been met.

(Image borrowed from jQuery Script.)


Articles and blog posts I used to get this working:
1. Customize the Azure AD B2C user interface for portals
2. Enable JavaScript and page layout versions in Azure Active Directory B2C
3. JS Password Validation
4. Customize the look and feel of your Azure AD B2C page

If you read the B2C documentation, it strongly advises against using JS libraries outside of those native to B2C. I opted to keep my solution as simple as possible to avoid opening additional security gaps.

To get this working, I followed the steps outlined in link 1, creating all of the needed assets in the Portal Management section of the Power Pages environment. Next, I used the content from link 3 to update the Web Template I created in the previous step. After that, I updated the Web Template to include the div noted in link 4; this is extremely important and can't be skipped. The last part of the process is updating the B2C user flow policy to reference the page created in the first step.

Here is a copy of my Web Template file from Portal Management.

<!DOCTYPE html>
<html>
<head>
<style>
      /* Style all input fields */
      input {
        width: 100%;
        padding: 12px;
        border: 1px solid #ccc;
        border-radius: 4px;
        box-sizing: border-box;
        margin-top: 6px;
        margin-bottom: 16px;
      }

      /* Style the submit button */
      input[type="submit"] {
        background-color: #04aa6d;
        color: white;
      }

      /* Style the container for inputs */
      .container {
        background-color: #f1f1f1;
        padding: 20px;
      }

      /* The message box is shown when the user clicks on the password field */
      #message {
        display: none;
        background: #f1f1f1;
        color: #000;
        position: relative;
        padding: 20px;
        margin-top: 10px;
      }

      #message p {
        padding: 10px 35px;
        font-size: 18px;
      }

      /* Add a green text color and a checkmark when the requirements are right */
      .valid {
        color: green;
      }

      .valid:before {
        position: relative;
        left: -35px;
        content: "✔";
      }

      /* Add a red text color and an "x" when the requirements are wrong */
      .invalid {
        color: red;
      }

      .invalid:before {
        position: relative;
        left: -35px;
        content: "✖";
      }
    </style>
</head>
<body>
 <!-- This div is the most important part of the process: B2C injects its form content into the element with id="api" -->
   <div id="api"></div>
    <div id="message">
      <h3>Password must contain the following:</h3>
      <p id="letter" class="invalid">A <b>lowercase</b> letter</p>
      <p id="capital" class="invalid">A <b>capital (uppercase)</b> letter</p>
      <p id="number" class="invalid">A <b>number</b></p>
      <p id="length" class="invalid">Minimum <b>8 characters</b></p>
    </div>
    
    <script>
      // The password input B2C renders; adjust the id if your page uses a different one (e.g., newPassword)
      var myInput = document.getElementById("password");
      var letter = document.getElementById("letter");
      var capital = document.getElementById("capital");
      var number = document.getElementById("number");
      var length = document.getElementById("length");

      // When the user clicks on the password field, show the message box
      myInput.onfocus = function () {
        document.getElementById("message").style.display = "block";
      };

      // When the user clicks outside of the password field, hide the message box
      myInput.onblur = function () {
        document.getElementById("message").style.display = "none";
      };

      // When the user starts to type something inside the password field
      myInput.onkeyup = function () {
        // Validate lowercase letters
        var lowerCaseLetters = /[a-z]/g;
        if (myInput.value.match(lowerCaseLetters)) {
          letter.classList.remove("invalid");
          letter.classList.add("valid");
        } else {
          letter.classList.remove("valid");
          letter.classList.add("invalid");
        }

        // Validate capital letters
        var upperCaseLetters = /[A-Z]/g;
        if (myInput.value.match(upperCaseLetters)) {
          capital.classList.remove("invalid");
          capital.classList.add("valid");
        } else {
          capital.classList.remove("valid");
          capital.classList.add("invalid");
        }

        // Validate numbers
        var numbers = /[0-9]/g;
        if (myInput.value.match(numbers)) {
          number.classList.remove("invalid");
          number.classList.add("valid");
        } else {
          number.classList.remove("valid");
          number.classList.add("invalid");
        }

        // Validate length
        if (myInput.value.length >= 8) {
          length.classList.remove("invalid");
          length.classList.add("valid");
        } else {
          length.classList.remove("valid");
          length.classList.add("invalid");
        }
      };      
</script>
</body>
</html>

The idea behind this was to keep it as simple as possible and get a basic example working. Yes, you can store the file in blob storage, but I wanted to keep all of the portal parts close together and avoid added complexity (not that creating this page in Portal Management was easy).

How to run a Databricks Notebook using Power Automate

Part of a project I was working on required mashing up data from SharePoint with data stored in a data lake. We settled on creating a Databricks notebook to read an input file, query the data lake using the input file, and then export an enriched file.

Here’s a high-level overview of what’s going to be created:

Call the notebook, parse the JSON response, loop until the notebook has finished, then respond to the notebook’s output.

In my case, triggering the notebook will require knowing its URL, bearer token, job id, and input parameters.
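To make that concrete, here's a rough Python sketch of the same call the Flow's HTTP action makes, using the Databricks Jobs API 2.1 run-now endpoint. The workspace URL, job id, and parameter names below are placeholders, not values from my environment.

import requests

# Placeholders -- swap in your workspace URL and PAT (see Authentication below)
DATABRICKS_HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi-your-personal-access-token"

# Trigger the job that wraps the notebook; notebook_params carries the input parameters
response = requests.post(
    f"{DATABRICKS_HOST}/api/2.1/jobs/run-now",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "job_id": 42,  # hypothetical job id
        "notebook_params": {"input_file": "orders.csv"},  # hypothetical parameter
    },
)
response.raise_for_status()
run_id = response.json()["run_id"]  # the do-until loop polls on this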


Parse the response from the HTTP call:


The notebook will take a little time to spin up and then process the input file. The best way to handle this is to use a basic do-until loop to check the status of the notebook job. I opted for a one-minute delay: call the API to get the job status, parse the response, then evaluate whether it's finished.


One thing to note about the do-until action: you don't want it to run for eternity, but you also don't want to add complexity with extra evaluations like "if looped X times, stop."
If you expand the Change limits option, you can set how many times it loops or cap how long it runs. Here I've set the action to stop looping after 20 tries. For more info on this, please check SPGuides for a detailed overview.
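Continuing the sketch above, the do-until loop translates to roughly this, assuming the same placeholder host, token, and run_id, and the documented Jobs API 2.1 runs/get endpoint:

import time
import requests

for attempt in range(20):  # mirrors the 20-try Count limit on the do-until action
    time.sleep(60)  # one-minute delay between status checks
    status = requests.get(
        f"{DATABRICKS_HOST}/api/2.1/jobs/runs/get",
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"run_id": run_id},
    ).json()
    state = status["state"]
    if state["life_cycle_state"] == "TERMINATED":
        # result_state is SUCCESS or FAILED; this drives the final branch of the Flow
        print(f"Run finished: {state.get('result_state')}")
        break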

The last step in the flow is to process the response from the notebook. If the job is successful, get the file from blob storage and load it into SharePoint; otherwise, create a Slack alert.

That’s it; using the example above, you can trigger a Databricks notebook using a Flow.

Authentication
When I set this up, my company allowed the use of Personal Access Tokens (PAT).
https://docs.databricks.com/dev-tools/auth.html#pat
The PAT is then used as the bearer token in the Flow's HTTP calls to trigger the notebook.

Use Power Automate to Create Jira Tasks

I'm working on a Power Pages project that requires a Jira service desk task to be created for each portal submission. Out of the box, Jira provides a simple connector to create tasks and requests, but the connector falls short of handling field types other than simple text. This means choice, checkbox, and dropdown fields are not available. That leaves only a couple of options, and I opted to use a simple HTTP action to create the tasks.

Basic overview of what I’ll be creating:
Flow that’s triggered by a dataverse row creation
Create a Jira task and populate metadata
Attach a file to the Jira task

Jira fields and types:
Issue Type – Choice
Request Type – Choice
Tortilla – Choice
Meat – Choice
Veggies – Checkbox multi-select
Number of Tacos – Number
Pickup Date Time – Date and Time
Summary – Text
Attachment – Attachment

Interfacing with the Jira API requires knowing a little about the fields you’ll be updating and the project and issue type you want to use. If you haven’t created one already, you need a Jira API token to work with the API.
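The Basic key used in the HTTP actions below is just your Atlassian account email and API token joined with a colon and base64-encoded. A quick Python sketch, with a hypothetical email and token:

import base64

email = "chef@taco.example"        # hypothetical Atlassian account email
api_token = "your-api-token-here"  # generated from your Atlassian account's security settings
key = base64.b64encode(f"{email}:{api_token}".encode()).decode()
print(f"Basic {key}")  # paste this value into the HTTP action's Authentication header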


Request type:
Go to Project Settings, then look at the URL and copy the value after pid=
https://taco.atlassian.net/secure/project/EditProject!default.jspa?pid=10001

With the ID, you can query the service desk request-types endpoint
https://taco.atlassian.net/rest/servicedesk/1/servicedesk/request/10001/request-types
In the returned payload, note the portal key and key values; combine the two, and you have the request type value tr/9f7c4029-6d23-4cb1-bb8a-02d0050d944b

Project key:
The project key is available on the project settings page, listed under the name field.
Example: TACOS

Issue type:
For simplicity, I’m only dealing with one issue type, and I captured the issueType value using the request-types endpoint noted above.
Example: “issueType”: 10015

For the remaining field values, you can get them in one of two ways.
Create a new issue in the browser, then use the browser developer tools (F12 or Ctrl + Shift + I) to inspect each field’s HTML value.

The other option is to click the gear icon (top right), select Issues, click on Custom Fields, search for a field, click on it, click Edit detail, and then grab the ID value from the URL. Once the ID is captured, join it with customfield_, resulting in customfield_10073, which is the field’s internal value.

In this example, the summary and issuetype fields are the only ones that do not follow the customfield_X naming convention. It's possible that other system-generated fields use different naming, but I'll dig into that another day.

Column Display Name | Column Internal Name | Column Type
Issue Type | issuetype | system
Request Type | customfield_10010 | system
Tortilla | customfield_10073 | Select List (single)
Meat | customfield_10074 | Select List (single)
Veggies | customfield_10075 | Checkboxes
Number of Tacos | customfield_10076 | Number Field
Pickup Date Time | customfield_10077 | Date Time Picker
Summary | summary | system
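If you'd rather not click through the UI, the documented GET /rest/api/3/field endpoint returns every field's id and display name in one call. A short sketch, reusing the key value from the auth example above:

import requests

fields = requests.get(
    "https://taco.atlassian.net/rest/api/3/field",
    headers={"Authorization": f"Basic {key}"},
).json()

# Prints pairs like: customfield_10073 - Tortilla
for f in fields:
    print(f["id"], "-", f["name"])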

Endpoint URL:
https://taco.atlassian.net/rest/api/3/issue/
Headers: {"Content-Type": "application/json"}
Authentication: Raw
Key: Basic aWhddsfadfafa..NOT…A…REAL…KEY..dafdfdafd=
Example payload:

{
  "fields": {
    "project": {
      "key": "TACOS"
    },
    "customfield_10010": "tr/9f7c4029-6d23-4cb1-bb8a-02d0050d944b",
    "summary": "Taco order summary",
    "issuetype": {
      "id": "10015"
    },
    "customfield_10073": {"value": "Flour"},
    "customfield_10074": {"value": "Chicken"},
    "customfield_10075": [{"value": "Pico"},{"value": "Grilled Veggies"}],
    "customfield_10076": 2,
    "customfield_10077":"2022-11-05T11:05:00.000+0000"
  }
}
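For reference, here's roughly what that HTTP action looks like as a Python call against the documented POST /rest/api/3/issue endpoint, again reusing the key from the auth sketch (payload abbreviated):

import requests

payload = {
    "fields": {
        "project": {"key": "TACOS"},
        "summary": "Taco order summary",
        "issuetype": {"id": "10015"},
        # ...plus the customfield_* values from the example payload above
    }
}

resp = requests.post(
    "https://taco.atlassian.net/rest/api/3/issue/",
    headers={"Authorization": f"Basic {key}", "Content-Type": "application/json"},
    json=payload,
)
resp.raise_for_status()
issue_key = resp.json()["key"]  # e.g., TACOS-123; needed for the attachment call below

The id/key/self response this returns is exactly what the Parse JSON schema at the end of this post maps.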

View of the task in Jira

How do you attach a file to a Jira task using Power Automate?
Attaching a file to a Jira task requires one more API call, and it’s simple!

Endpoint URL:
https://taco.atlassian.net/rest/api/3/issue/Key/attachments
Headers: {"X-Atlassian-Token": "no-check"}
Authentication: Raw
Key: Basic aWhddsfadfafa..NOT…A…REAL…KEY..dafdfdafd=

Example payload:

{
  "$content-type": "multipart/form-data",
  "$multipart": [
    {
      "headers": {
        "Content-Disposition": "form-data; name=\"file\"; filename=@{outputs('Get_file_properties')?['body/{FilenameWithExtension}']}"
      },
      "body": @{body('Get_file_content')}
    }
  ]
}

Attachment
I'm getting a file from SharePoint and passing its contents to the API call for the attachment. The same thing works with Azure blob storage or grabbing a file from Dataverse. If you want to attach more than one file, create additional HTTP attachment calls.
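As a point of comparison, here's a sketch of the same attachment call in Python; the requests library builds the multipart body that the Flow payload above spells out by hand (the filename is hypothetical):

import requests

with open("taco-order.pdf", "rb") as f:
    resp = requests.post(
        f"https://taco.atlassian.net/rest/api/3/issue/{issue_key}/attachments",
        headers={
            "Authorization": f"Basic {key}",
            "X-Atlassian-Token": "no-check",  # required, or Jira rejects the upload
        },
        files={"file": f},  # requests sets the Content-Disposition header for us
    )
resp.raise_for_status()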

Here’s a simple overview of the Flow:

Parse JSON schema:

{
    "type": "object",
    "properties": {
        "id": {
            "type": "string"
        },
        "key": {
            "type": "string"
        },
        "self": {
            "type": "string"
        }
    }
}

How to Audit Power Platform and SharePoint

This post will be an ongoing adventure into using Microsoft Purview to audit, track, review, and learn about updates to objects within the Power Platform. My adventure into this tool was prompted by my in-house security team asking if I could help identify whether a specific SharePoint list had been viewed and who viewed it. In SharePoint on-prem, this sort of info could be mined from a site's audit logs, but with SharePoint Online, auditing is offloaded to Purview.

To kick things off, I will run a report to see who has accessed my SharePoint Dev site this week. From the audit page, you can set a date range for your search and select activities like deleting a file or adding someone to a group; for the file, folder, or site box, you enter the site you want to target. Last but not least is the user’s box; this one is self-explanatory.

Search results are ready for viewing:

The results show that a user created a list item and then viewed the list a few times.


The audit logs are held for roughly 90 days, though outside variables can impact this. Here is the warning you'll see if you try to search for items older than 90 days:

Audit log retention policies might impact search results. Activities that happened over 90 days ago will only show up in results for users who have licensing for long-term audit log retention.

That’s it for now; as you can see, this tool can be extremely valuable, especially when dealing with audits or if data magically goes missing.

Future updates to this article will show how to track changes to SharePoint lists, dataverse objects, Power BI, Power Automate (Flow), and more!

URL to access the compliance center / Purview: https://compliance.microsoft.com/auditlogsearch

Here is a new post showing how to search a single library in SharePoint:
https://www.sharepointed.com/2024/03/sharepoint-audit-using-purview/

Azure Runbook Job Name error: Token request failed..Exception

When you move from a SharePoint on-prem environment to SharePoint Online, you lose the server-side environment you’d normally use to run PowerShell scripts or tasks to interact with SharePoint. In my opinion, and please correct me if I’m wrong, the closest thing to a server-side environment in a cloud environment is Azure Runbooks or Azure Function Apps. I went with Azure Runbooks due to its ability to handle long-running tasks.

The error I recently encountered in my runbook was: runbook name error: Token request failed..Exception. At first, I thought there might be something wrong with the way I was connecting to Key Vault, but that wasn't it. Next was my connection to SharePoint, which is handled using a SharePoint-generated client ID and secret. Oddly enough, I had updated this just a few months back, so it wasn't an obvious candidate for a failure point.

I went to my target SharePoint site, created a new set of credentials using siteName/_layouts/15/AppRegNew.aspx and siteName/_layouts/15/appinv.aspx. After creating the credentials, I went back to the runbook and plugged them in, and it worked!

Long story short, if you get this error: Token request failed..Exception, try creating a new client ID and secret and see if that clears things up.

You can also use this script to test your client ID and secret: Connect-PnPOnline | PnP PowerShell

$siteUrl = "https://taco.sharepoint.com/sites/burrito"
# Connect with the client ID and secret, returning the connection object
$testConn = Connect-PnPOnline -Url $siteUrl -AppId "1111-2222-3333-4444-555555555555" -AppSecret "X3tssvCebdl/c/gvXsTACOajvBurrito=" -ReturnConnection
# Pass the connection explicitly; -ReturnConnection does not set a default connection
$list = Get-PnPList "Tacos" -Connection $testConn
Write-Output $list

Power App Portal and Azure B2C

This post is a mental brain dump of the issues I encountered when configuring Azure B2C in a new Power App Portal or Power Pages environment.

Error when trying to log in via B2C at .powerappsportals.com/signin-aad-b2c_1:
Page not found
Open Site Settings, filter the page to only display settings for the website you are working with, then search for: authentication/openidconnect/

You will see all of the items related to the B2C items you set up on the front end. I’ve noticed, for whatever reason, that the settings don’t always sync with the backend.

To fix the Page not found error, I switched Authentication/OpenIdConnect/AAD-B2C_1/RegistrationEnabled to true. Open the portal in a new private browser window and try logging in again.

Power App Portal authentication settings not syncing.
If you create a new B2C user flow and want to update your portal settings, save yourself time: open your portal site settings, search for authentication/openidconnect/, and update the settings there. For some reason, updates made on the portal settings page don't always sync to the backend.


Failed to create/update site setting for default provider.
If you get this error, try setting a different provider as the default, then try your original selection again.

If you log in via B2C and see either of the messages below, check the RegistrationEnabled setting noted above and purge your cache.
Registration is disabled
Invalid sign-in attempt.

Use Python to Query a LARGE SharePoint list.

When querying a SharePoint list that has more than 5,000 items, you’ll likely receive an error like this:

This view cannot be displayed because it exceeds the list view threshold (5000 items) enforced by the administrator. 


Microsoft.SharePoint.SPQueryThrottledException: The attempted operation is prohibited because it exceeds the list view threshold. (500 Server Error: Internal Server Error)

Or your query will only return the default 100 items. To get around this, you can use pagination to query the list and return ALL of the items.
Example:
all_items = list_to_export.items.paged(1000).get().execute_query()

Full script using VS Code:

from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext


app_settings = {
    'url': 'https://taco.sharepoint.com/sites/queso/',
    'client_id': 'ID here',
    'client_secret': 'shhhh its a secret',
}

context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'], client_secret=app_settings['client_secret'])

#connect to the site
ctx = ClientContext(app_settings['url'], context_auth)
ctx.execute_query()

#get the target list
list_title = "List of Tacos"
list_to_export = ctx.web.lists.get_by_title(list_title)

#get all of the list items
all_items = list_to_export.items.paged(1000).get().execute_query()
list_items = [item for item in all_items]

print("Item count: {0}".format(len(list_items)))

The example above connects to a SharePoint site using a client ID and secret, then queries the list. Again, the key here is pagination (paged). You can adjust the page size to better fit your needs, but be sure to keep it under 5,000, or you'll be back to square one.

Filter a Power BI Source Before Importing

This has bugged me for a long time, and I wasn't sure how to get around it. I was tasked with creating a Power BI report using data from a reasonably big data set, but the report only displayed a small part of it. The source is in the neighborhood of 10,000,000 rows, and my report displayed roughly 100,000. Why direct query, import, or incrementally refresh all of the data when such a small part of it is needed? I was looking for a way to filter the data set before adding the applied steps in the query editor. Something like: Select * from Orders where Status = Open

The answer I was looking for is Native Query, and I’m not sure how I completely failed to learn about this until now.

For this example, I’m using the Northwind Traders orders dataverse table. Connect to the dataverse environment, select the orders table, then open the Advanced Editor window. Change the query to this:

let
    Source = CommonDataService.Database("taco.crm.dynamics.com"),
    superQuery = Value.NativeQuery(Source, "Select * from nwind_orders")
in
    superQuery

All I'm doing here is checking that the newly formed query works. Next, right-click on the step where you created the Native Query in the Applied Steps window and select View Native Query. Update the query to show only New orders. This can be done by adding a Where clause:

Select * From nwind_orders Where nwind_orderstatusidname = 'New'

Click OK, and the query should refresh, showing only new orders.

let
    Source = CommonDataService.Database("taco.crm.dynamics.com"),
    superQuery = Value.NativeQuery(Source, "Select * From nwind_orders Where nwind_orderstatusidname = 'New'")
in
    superQuery

That’s all it takes to filter the data set before transforming it! Be sure to check the linked documentation and take a look at query folding.

Power App Unable to add flow

I was in the process of porting a production Power App to a dev environment, and I ran into this error.

Unable to add flow
There was a problem adding your service. Please try again later.

The problem was that the existing Flow I tried to add to my Power App was turned off.



Navigate to make.powerapps.com, locate the Flow you are trying to add to the Power App, turn it on, then try adding it to the app again.

Purge Cache in Power App Portals

This post will be updated to link to a future post that outlines all the different areas of Power App Portals. If you have spent more than a few minutes in Portals, you'll quickly notice that there is no single place to update or change settings.

How do you purge the cache for a portal?
Navigate to the Power Platform admin center
https://admin.powerplatform.microsoft.com/
Click on Environments and select your environment
Select the portal, then click Purge Cache in the ribbon

Edit: The Purge Cache button won't always pop up the modal asking if you want to purge the cache. If you visit this page, select your portal site, and click the Purge Cache button, the modal will appear.

https://admin.powerplatform.microsoft.com/resources/portals