How Do You Get a Power Pages Attachment That’s Stored in Blob Storage

My Power Apps Portal (Power Pages) environments are configured to use Azure blob storage for form attachments. One of the primary reasons for doing this is to avoid filling up expensive Dataverse storage with endless attachments submitted by end users.

This article outlines how to set up Azure storage: link

What I’m going to demo is how to get ONE attachment that’s uploaded to a form. If your form allows multiple attachments, you’d simply loop through them.

In the example, I’m using the soon-to-be-obsolete Dataverse connector, but the same basic flow design applies to the normal connector.

When a row is added to my table, the flow is triggered.
The flow then queries the Note (annotation) table using the ID from the source table.
Filter query: _objectid_value eq source_table_id

The List rows Notes query will return an array, but I’m only dealing with one attachment, so there’s no need to loop through it. To avoid an unnecessary loop, a function can be used to target a single object from the array: first(body('List_rows_Notes')?['value'])?['annotationid']
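If you do need to handle multiple attachments, swap the first() function for an Apply to each loop over the same array. A minimal sketch, assuming the default loop name of Apply_to_each:

Apply to each input: body('List_rows_Notes')?['value']
Current attachment ID: items('Apply_to_each')?['annotationid']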

From the Get row Note action, annotationid and filename will be needed to form the path to the blob. Using the concat function, I’m combining the container name, annotationid, and filename. Also note the transformation on annotationid: the hyphens need to be removed, and the string needs to be lowercase. The last part of the transformation is to remove .azure.txt from the filename.

concat('/blobcontainer/',toLower(replace(outputs('Get_row_Note')?['body/annotationid'], '-', '')),'/', split(outputs('Get_row_Note')?['body/filename'], '.azure.txt')[0])

The end result of the transformation will be:

/blobcontainer/annotationid/filename
/blobcontainer/cf03e4cf7f72ad118561002248881923/example.pdf

With the path to the blob formed, the get blob content action can retrieve the file.

It’s that simple.

A couple of notes:
It would be wise to leverage a virus-scanning tool like Cloudmersive.
If you haven’t already noticed, when a user uploads a file that contains special characters in the name, it’s saved to the Note table without the special characters, but when it’s moved to blob storage, the characters remain in the name. Yes, that’s a bug Microsoft has yet to fix. You can avoid this by adding JavaScript to the upload page to block files that fall into this category, or by writing another flow to clean file names before the form is processed (a sketch follows the example below).
Example:
Uploaded filename: my report 1:2:3.pdf
Note table: my report 123.pdf
Blob: my report 1:2:3.pdf
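If you go the cleanup-flow route, a chain of replace() functions gets the job done. A minimal sketch, assuming the filename comes from the Get row Note action used earlier; the characters removed are illustrative:

replace(replace(replace(outputs('Get_row_Note')?['body/filename'], ':', ''), '#', ''), '%', '')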

Azure Runbook Job Name error: Token request failed..Exception

When you move from a SharePoint on-prem environment to SharePoint Online, you lose the server-side environment you’d normally use to run PowerShell scripts or tasks that interact with SharePoint. In my opinion, and please correct me if I’m wrong, the closest thing to a server-side environment in the cloud is Azure Runbooks or Azure Function Apps. I went with Azure Runbooks due to their ability to handle long-running tasks.

The error I recently encountered in my runbook was: runbook name error: Token request failed..Exception. At first, I thought there might be something wrong with the way I was connecting to Key Vault, but that wasn’t it. Next was my connection to SharePoint; this is handled using a SharePoint-generated client ID and secret. Oddly enough, I had just updated this a few months back, so it wasn’t an obvious candidate for a failure point.

I went to my target SharePoint site, created a new set of credentials using siteName/_layouts/15/AppRegNew.aspx and siteName/_layouts/15/appinv.aspx. After creating the credentials, I went back to the runbook and plugged them in, and it worked!

Long story short, if you get this error: Token request failed..Exception, try creating a new client ID and secret and see if that clears things up.

You can also use the script below to test your client ID and secret; see the Connect-PnPOnline documentation for PnP PowerShell.

# Test a SharePoint app-only client ID and secret with PnP PowerShell
$siteUrl = "https://taco.sharepoint.com/sites/burrito"
$testConn = Connect-PnPOnline -Url $siteUrl -AppId "1111-2222-3333-4444-555555555555" -AppSecret "X3tssvCebdl/c/gvXsTACOajvBurrito=" -ReturnConnection
# Use the returned connection explicitly; -ReturnConnection does not set a default connection
$list = Get-PnPList "Tacos" -Connection $testConn
Write-Output $list

Create Approvals That NEVER Expire

If you are reading this, you likely ran into an issue where you created an approval flow, but it expired before the recipient had time to approve or reject it. The timeout for an approval, or any flow, is thirty days; after that, the flow stops running. Yes, there are some clever workarounds to alert you if the flow times out, but who wants to mess with that?

The approach I took to solve this was to leverage some of the existing tooling, then add to it. When you create an approval, a row is created in the Dataverse Approval table. As we all know, a flow is trigger-based, so why not create one that simply monitors the Approval table and handles things from there?

At a high level, here is the basic approach.

Start by creating a simple flow that initiates an approval, then run it. In my example, note the value in the Item Link field; this will come into play later.
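For reference, here’s a minimal sketch of the create side using the Start and wait for an approval action; the title, assignee, and link values are illustrative:

Start and wait for an approval
    Approval type: Approve/Reject - First to respond
    Title: Please review this request
    Assigned to: someone@yourtenant.com
    Item link: https://www.sharepointed.com/stuff/item-123
    Item link description: Review the item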

Next, navigate to make.powerapps.com, expand the Dataverse section, and click on Tables. After the page loads, click the All link under Tables, then search for approval. If you search for approval and do not get a result, make sure you click the All link.



Open the Approval table; in it, you will see your approval, and possibly more depending on how old your environment is and how many people in your company use approvals. When looking at the data, the takeaway is what is stored in the table and what can be used in the flow that handles the outcome of the approval. In my case, the Item Link field is key to handling the approval response: with it, I can filter the value and know whether I need to take action on the item.

When creating the flow that responds to the approval, you can filter it at the design level or in the trigger settings. I went with the trigger settings due to the number of approvals that could be firing across my organization in our default tenant. Why do you need to filter it? Just assume other approvals might be writing to the same Dataverse table.

Trigger Conditions

@contains(triggerBody()?['msdyn_flow_approval_itemlink'],'https://www.sharepointed.com/stuff/')

@not(equals(triggerBody()?['msdyn_flow_approval_result'], null))

The above conditions filter on the value I passed in the create approval flow (Item Link) and on whether the item has been approved or rejected.

Here is an overview of the flow that handles the outcome of the approval. I mixed Dataverse connector types due to an issue with the trigger condition not working with the green Dataverse connector. In the Expand Query field, I used the FetchXML builder to query over to the Approval Response table to get the comment field; it’s not used in the example, but nonetheless, it’s there. From the Get a row by ID action, the response of the approval is available to handle the outcome (Result) of the approval.

To my knowledge, there is no reason why you can’t create an approval that is active for months, if not years.

Notes:
1) You can access and review the approval records using Power BI, Flow, Access, ___
2) You can bulk update the records using PowerShell, Flow, Access (be real careful), __
3) You can pass items in the Details field, then parse them out when handling the approval. Here is one simple example where I’m passing a SharePoint item ID from the approval and parsing it in the response flow:



Response flow compose statement that parses the Details field.

Expression: last(split(triggerBody()?['msdyn_flow_approval_details'],'**SPItemID:** '))
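For example, if the Details field was built with something like this (illustrative values):

Please review the request.
**SPItemID:** 42

the expression above splits on the **SPItemID:** label and returns the string 42.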




YES, this is a lot, but the general idea is simple: create an approval and handle the response.

Create Dynamic Hyperlinks And Send An Email Action

Over the years, some updates to Flow have been better than others, and others, not so much. If memory serves, the send an email action used to handle dynamic hyperlinks without much work, but something went sideways with one of the updates, causing dynamic hyperlinks not to work as you’d want.

Here is a basic example of including a hyperlink in an outgoing email; further down the page, I’ll provide a more realistic example.


In this example, I set up the Flow to trigger when an item is added to a SharePoint library. The key thing to note in this example is the value of the varHyperlink variable; note the double quotes around the Link to item.
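Here’s a minimal sketch of the variable value, assuming the variable is named varHyperlink and LINK_TO_ITEM stands in for the Link to item dynamic value:

<a href="LINK_TO_ITEM">Click here to view the item</a>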

Use Power Automate to Update a SharePoint Person Field

Using the SharePoint HTTP flow action to update a person or group field, I kept getting this error:

A 'PrimitiveValue' node with non-null value was found when trying to read the value of a navigation property; however, a 'StartArray' node, a 'StartObject' node, or a 'PrimitiveValue' node with null value was expected.

The field I was attempting to update is named Submitted By, with an internal name of Submitted_x0020_By. Each time I tried to update the field, I saw the error noted above. It wasn’t until I looked at one of my previous flow runs that I noticed the issue: the field name I should have been using is Submitted_x0020_ById.



Update flow:
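Here’s a minimal sketch of the Send an HTTP request to SharePoint action; the list name, entity type, and item ID are illustrative:

Method: POST
Uri: _api/web/lists/getbytitle('MyList')/items(1)
Headers:
    Accept: application/json;odata=verbose
    Content-Type: application/json;odata=verbose
    X-HTTP-Method: MERGE
    IF-MATCH: *
Body:
{
    "__metadata": { "type": "SP.Data.MyListListItem" },
    "Submitted_x0020_ById": 6
}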

How do you update a Person field if the field allows for multiple selections? The example below will update the field with two different user values, but clearly, this could be extended to be more dynamic.

body('Send_an_HTTP_request_to_SharePoint_User_1')?['d']?['Id']
body('Send_an_HTTP_request_to_SharePoint_User_2')?['d']?['Id']

concat('[',outputs('Compose_1'),',',outputs('Compose_2'),']')

The concat expression above builds the array of user IDs; its output would replace the hard-coded 6 and 54 in the body below.

{
    "__metadata": {
        "type": "SP.Data.AssignedToListListItem"
    },
    "SubmittedByIDsId": {
        "results": [
            6,
            54
        ]
    }
}

Flow and SharePoint Online Large Libraries

Using Flow to get or check for files in a large SharePoint library can be a little tricky. If you are sure your library will always stay under 5,000 items, the Get Files (properties only) Flow action is a quick n’ simple approach. When your library crosses the mythical 5k mark, or somewhere in that neighborhood, the Get Files action will fail to return results without warning. Below, I’m outlining other options for working with large libraries.

Setup:



Option 1: Get Files using an Odata filter query
downside: only use this in small libraries
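A minimal sketch of the Filter Query on the Get files action; the column and date are illustrative:

Created gt '2021-12-12'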



Result:


Option 2: use the SharePoint API
downside: the lack of transparency from Microsoft related to how often data is crawled.
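A minimal sketch of the search call using the Send an HTTP request to SharePoint action; the path is illustrative:

_api/search/query?querytext='Path:https://tenant.sharepoint.com/sites/sitename/library/*'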

Result:

Option 3: use the SharePoint API along with a filter on library metadata (see the sketch below). This option requires that you have metadata set up on the library to filter on, and there is no wildcard/contains option; the filtered value needs to be exact.
downside: you will need to set up your metadata ahead of time or create it after the fact and then backfill.
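A minimal sketch, assuming the library column you filter on is mapped to a managed property such as RefinableString00 (illustrative):

_api/search/query?querytext='Path:https://tenant.sharepoint.com/sites/sitename/library/* RefinableString00:"Invoice"'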

Result:

The more I learn about Flow and SharePoint Online, the more I’m starting to like Option 3 when doing a lookup type of search. SharePoint Search is an extremely powerful tool if the search index is fresh.

Download a File From SharePoint Online Using Python

How do you download a file from a SharePoint Online library using Python?

Items needed to run the script in this example:
Office365 Rest Python Client library:
https://pypi.org/project/Office365-REST-Python-Client/
SharePoint App Only Client Id and Secret:
Microsoft documentation:
https://docs.microsoft.com/en-us/sharepoint/dev/solution-guidance/security-apponly-azureacs
You can create an app principal that is limited to a single site, list, library, or a combination of them:
https://piyushksingh.com/2018/12/26/register-app-in-sharepoint/

from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File

app_settings = {
    'url': 'https://YOURtenant.sharepoint.com/sites/somesite/',
    'client_id': '12344-abcd-efgh-1234-1a2d12a21a2121a',
    'client_secret': 'Oamytacohungry234343224534543=',
}

# Authenticate with the app-only client ID and secret
context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'], client_secret=app_settings['client_secret'])

ctx = ClientContext(app_settings['url'], context_auth)
web = ctx.web
ctx.load(web)
ctx.execute_query()

# The file path is server-relative, so it includes the site path
response = File.open_binary(ctx, "/sites/somesite/Shared Documents/Invoice.pdf")
with open("./Invoice.pdf", "wb") as local_file:
    local_file.write(response.content)

If the above script does not work, take a step back and make sure you are connected to the site.

from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File

app_settings = {
    'url': 'https://YOURtenant.sharepoint.com/sites/somesite/',
    'client_id': '12344-abcd-efgh-1234-1a2d12a21a2121a',
    'client_secret': 'Oamytacohungry234343224534543=',
}

context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'], client_secret=app_settings['client_secret'])

ctx = ClientContext(app_settings['url'], context_auth)
web = ctx.web
ctx.load(web)
ctx.execute_query()

print("Site title: {0}".format(web.properties['Title']))

Use Flow to Get Files Created or Modified Today in a SharePoint Library

Scenario:
Each day, a couple of Azure Runbooks export SharePoint list items and upload them to a SharePoint library. If one of the Runbooks fails, I need to send an email alert that something went wrong.

Basic logic:
If files created today in SharePoint <> X, send an email.

The easy solution would have been to loop through the files, check their created date, increment a variable, then make a condition statement.

More-better way:
Run flow daily at 6:00 PM
Send an HTTP request to SharePoint to get files
Parse the response
Condition statement
— if true, send an email

Uri text from the HTTP call:

_api/search/query?querytext='Path%3Ahttps%3A%2F%2Fsharepointed.sharepoint.com%2Fsites%2Fsitename%2Fsubsite%2Fexports%2F*%20LastModifiedTime%3Dtoday'

Decoded, the querytext reads: Path:https://sharepointed.sharepoint.com/sites/sitename/subsite/exports/* LastModifiedTime=today

Parse JSON schema

{
    "type": "object",
    "properties": {
        "odata.metadata": {
            "type": "string"
        },
        "ElapsedTime": {
            "type": "integer"
        },
        "PrimaryQueryResult": {
            "type": "object",
            "properties": {
                "RelevantResults": {
                    "type": "object",
                    "properties": {
                        "TotalRows": {
                            "type": "integer"
                        },
                        "TotalRowsIncludingDuplicates": {
                            "type": "integer"
                        }
                    }
                }
            }
        }
    }
}
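In the condition, compare the row count from the parsed response to the expected file count. A minimal sketch, assuming the Parse JSON action is named Parse_JSON:

body('Parse_JSON')?['PrimaryQueryResult']?['RelevantResults']?['TotalRows']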

Edit –
If you want to search for files created before or after a date, you can adjust the query like this:
and created %3E 2021-12-12T19:07:51.0000000Z
This will fetch any files created after Dec 12th, 2021.
The URL encoding for greater than is %3E, and for less than it’s %3C.

Power BI and SharePoint Person or Group Field

I created a simple Power BI report to pull data from a SharePoint list and quickly ran into a problem with the Person or Group column type.

Error:
Expression.Error: We cannot convert the value “” to type Table.
Details:
Value=
Type=[Type]

In the report, I was trying to expand a Person or Group column, but not all of the field values were populated.

Here is the fix:

let
    Source = SharePoint.Tables("https://taco.sharepoint.com/sites/food", [Implementation="2.0", ViewMode="All"]),
    #"abcc-b8fe-4b23-be01-abc5f2c3320c" = Source{[Id="abcc-b8fe-4b23-be01-abc5f2c3320c"]}[Items],
    // If the person field is empty, substitute a placeholder record so the expand doesn't fail
    #"Expanded Assigned User2" = Table.TransformColumns(#"abcc-b8fe-4b23-be01-abc5f2c3320c", {{"Assigned User", each if Value.Is(_, type list) then _{0} else [title = "not assigned"], type record}}),
    #"Expanded Assigned Analyst" = Table.ExpandRecordColumn(#"Expanded Assigned User2", "Assigned User", {"title"}, {"Assigned User.title"})
in
    #"Expanded Assigned Analyst"