Over the years, some updates to Flow have been better than others. If memory serves, the Send an email action used to handle dynamic hyperlinks without much work, but something went sideways with one of the updates, and dynamic hyperlinks stopped working the way you'd want.
Here is a basic example of including a hyperlink in an outgoing email; further down the page, I'll provide a more realistic example.
In this example, I set up the Flow to trigger when an item is added to a SharePoint library. The key thing to note is the value of the varHyperlink variable; note the double quotes around the Link to item value.
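For reference, the value of varHyperlink boils down to an HTML anchor, with the dynamic Link to item value wrapped in double quotes (the link text here is just a placeholder, use whatever you like):

    <a href="Link to item">View the new item</a>

The "Link to item" part is the dynamic content from the trigger, dropped in between the double quotes. If you're using the Send an email (V2) action, you may also need to switch the body to code view (the </> icon) so the HTML renders as a link instead of plain text.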
Using the SharePoint HTTP flow action to update a person or group field, I kept getting this error:
A 'PrimitiveValue' node with non-null value was found when trying to read the value of a navigation property; however, a 'StartArray' node, a 'StartObject' node, or a 'PrimitiveValue' node with null value was expected.
The field I was attempting to update is named Submitted By, with an internal name of Submitted_x0020_By. Each time I tried to update the field, I got the error noted above. It wasn't until I looked at one of my previous flow runs that I noticed the issue: the field name I should be using is Submitted_x0020_ById.
Update flow:
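For context, here is a rough sketch of what the Send an HTTP request to SharePoint action can look like for a single-value person field. The list name, item ID, entity type, and user ID below are placeholders; the part that matters is the Submitted_x0020_ById field, which takes the user's numeric ID from the site's user information list:

    Method: POST
    Uri: _api/web/lists/getbytitle('YourList')/items(123)
    Headers:
      Accept: application/json;odata=verbose
      Content-Type: application/json;odata=verbose
      X-HTTP-Method: MERGE
      IF-MATCH: *
    Body:
      {
        "__metadata": { "type": "SP.Data.YourListListItem" },
        "Submitted_x0020_ById": 11
      }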
How do you update a Person field if the field allows multiple selections? The example below updates the field with two different user values, but clearly this could be extended to be more dynamic.
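A sketch of the same request body for a multi-select person field (the user IDs are placeholders): the only change is that the Id field takes a results array of user IDs.

    Body:
      {
        "__metadata": { "type": "SP.Data.YourListListItem" },
        "Submitted_x0020_ById": { "results": [11, 22] }
      }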
Using Flow to get or check for files in a large SharePoint library can be a little tricky. If you are sure your library will always stay under 5,000 items, the Get files (properties only) Flow action is a quick n' simple approach. Once your library crosses the mythical 5,000-item mark (or somewhere in that neighborhood), the Get files action will fail to return results, without warning. Below are other options for working with large libraries.
Setup:
Option 1: Get files using an OData filter query. Downside: only use this in small libraries.
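As a rough sketch (the site, library, column, and value are placeholders), the Filter Query on the Get files (properties only) action is a standard OData filter:

    Get files (properties only)
      Site Address: https://YOURtenant.sharepoint.com/sites/somesite
      Library Name: DocLib
      Filter Query: Title eq 'Invoice'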
Result:
Option 2: use the SharePoint API. Downside: the lack of transparency from Microsoft about how often the data is crawled.
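The crawl comment is because this option leans on SharePoint search. A rough sketch of the Send an HTTP request to SharePoint action (the site, path, and search term are placeholders):

    Method: GET
    Uri: _api/search/query?querytext='Invoice path:"https://YOURtenant.sharepoint.com/sites/somesite/DocLib"'
    Headers:
      Accept: application/json;odata=nometadata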
Result:
Option 3: use the SharePoint API along with a filter action on the library. This option requires that you have metadata set up on the library to filter on, and there is no wildcard / contains option; the filtered value needs to be exact. Downside: you will need to set up your metadata ahead of time, or create it after the fact and then backfill.
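One way to picture Option 3 (the exact setup here is an assumption, and the column name and value are placeholders): run the same API call, then use a Filter array action against the parsed results, comparing a metadata column on each item to an exact value.

    Filter array
      From: the array of items from the Parse JSON step
      Condition (advanced mode): @equals(item()?['DocType'], 'Invoice')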
Result:
The more I learn about Flow and SharePoint Online, the more I’m starting to like Option 3 when doing a lookup type of search. SharePoint Search is an extremely powerful tool if the search index is fresh.
Here's a Python example that downloads a file from SharePoint Online using the Office365-REST-Python-Client library and an app's client id and secret.

from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File

# Site URL and app credentials (replace with your own values)
app_settings = {
    'url': 'https://YOURtenant.sharepoint.com/sites/somesite/',
    'client_id': '12344-abcd-efgh-1234-1a2d12a21a2121a',
    'client_secret': 'Oamytacohungry234343224534543=',
}

# Authenticate with the app principal and build the client context
context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'],
                                   client_secret=app_settings['client_secret'])
ctx = ClientContext(app_settings['url'], context_auth)

# Load the web to confirm the connection
web = ctx.web
ctx.load(web)
ctx.execute_query()

# Download the file and save a local copy
response = File.open_binary(ctx, "/Shared Documents/Invoice.pdf")
with open("./Invoice.pdf", "wb") as local_file:
    local_file.write(response.content)
If the above script does not work, step back and make sure you can connect to the site at all. The following script connects to a site and outputs its title, which is a quick way to validate that a connection can be made.
from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext

# Site URL and app credentials (replace with your own values)
app_settings = {
    'url': 'https://YOURtenant.sharepoint.com/sites/somesite/',
    'client_id': '12344-abcd-efgh-1234-1a2d12a21a2121a',
    'client_secret': 'Oamytacohungry234343224534543=',
}

# Authenticate and build the client context
context_auth = AuthenticationContext(url=app_settings['url'])
context_auth.acquire_token_for_app(client_id=app_settings['client_id'],
                                   client_secret=app_settings['client_secret'])
ctx = ClientContext(app_settings['url'], context_auth)

# Load the web and print its title to confirm the connection works
web = ctx.web
ctx.load(web)
ctx.execute_query()
print("Site title: {0}".format(web.properties['Title']))
You can also use a certificate and thumbprint to connect to SPO via an Azure App registration.
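Newer versions of the Office365-REST-Python-Client library support that directly; here is a minimal sketch (the tenant name, client id, thumbprint, and certificate path are placeholders, and the exact method signature can vary by library version, so check the docs for your install):

from office365.sharepoint.client_context import ClientContext

cert_settings = {
    'url': 'https://YOURtenant.sharepoint.com/sites/somesite/',
    'tenant': 'YOURtenant.onmicrosoft.com',
    'client_id': '12344-abcd-efgh-1234-1a2d12a21a2121a',
    'thumbprint': 'AB1234567890CDEF1234567890ABCDEF12345678',
    'cert_path': 'selfsigncert.pem',
}

# Authenticate with the certificate registered on the Azure App registration
ctx = ClientContext(cert_settings['url']).with_client_certificate(
    cert_settings['tenant'],
    cert_settings['client_id'],
    cert_settings['thumbprint'],
    cert_settings['cert_path'],
)

# Quick check that the connection works
web = ctx.web
ctx.load(web)
ctx.execute_query()
print("Site title: {0}".format(web.properties['Title']))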
Here is a SharePlum example using a username and password to connect to SharePoint Online. More details about SharePlum can be found here: https://github.com/jasonrollins/shareplum
from shareplum import Site
from shareplum import Office365

# Site and account details (replace with your own values)
sharepoint_url = 'https://YOURtenant.sharepoint.com/sites/DEV/'
username = 'You@YourDomain.com'
password = 'Password'

# Authenticate against Office 365 and grab the auth cookie
authcookie = Office365('https://YOURtenant.sharepoint.com',
                       username=username,
                       password=password).GetCookies()

# Connect to the site (depending on your tenant you may also need version=Version.v365 from shareplum.site)
site = Site(sharepoint_url, authcookie=authcookie)

# Pull up to 200 items from the list's All Items view
sp_list = site.List('Your List')
data = sp_list.GetListItems('All Items', row_limit=200)
If you get the following error, you won't be able to connect with a username and password; you'll need to use an App Password instead.
  File "C:\Python311\Lib\site-packages\shareplum\office365.py", line 80, in get_security_token
    raise Exception('Error authenticating against Office 365. Error from Office 365:', message[0].text)
Exception: ('Error authenticating against Office 365. Error from Office 365:', "AADSTS50076: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access ''.")
Scenario: Each day I have a couple of Azure Runbooks export SharePoint list items and upload them to a SharePoint library. If one of the Runbooks fails, I needed to send an email alert that something went wrong.
Basic logic: If files created today in SharePoint <> X, send an email.
The easy solution would have been to loop through the files, check their created date, increment a variable, then make a condition statement.
More-better way:
- Run the flow daily at 6:00 PM
- Send an HTTP request to SharePoint to get the files
- Parse the response
- Condition statement: if true, send an email
Edit: If you want to search for files created before or after a date, you can adjust the API call like this: and created %3E 2021-12-12T19:07:51.0000000Z. This will fetch any files created after December 12th, 2021. The URL-encoded value for greater than is %3E, and less than is %3C.
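To put that in context, here is a rough sketch of the full Send an HTTP request to SharePoint action with the date restriction appended. I'm assuming the search API is what's being queried, and the site and library path are placeholders:

    Method: GET
    Uri: _api/search/query?querytext='path:"https://YOURtenant.sharepoint.com/sites/somesite/DocLib" and created %3E 2021-12-12T19:07:51.0000000Z'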
Out of the box, if you upload a document to SharePoint and tag metadata to it, that data is attached to the document properties. This is the case for most column types you create in a library, and on random occasions, Document IDs are copied as well.
If you are working with SharePoint on-prem you can disable this functionality at the SPWeb level, but that’s not the case with SharePoint Online.
Example of what I was seeing in my sample library:
- Created a new library titled DocLib
- Added a text column titled TacoFiller
- Uploaded an Excel file, then populated TacoFiller with the word Beans
- Downloaded the file and looked at the properties: File -> Info -> All Properties (Excel O365)
Notice that property TacoFiller is set to Beans.
I then created a local copy of the file and uploaded it to DocLib.
SharePoint has read the TacoFiller property from the Excel file and applied the data to the column in the library.
Short of doing a document migration, I have NO idea why you'd want this enabled out of the box. If anything, it should be disabled by default, and admins could enable it as needed. /rant
TL;DR / how-to disable this functionality: First, you can only disable this at the library level!
A couple of notes about the script: you can download the SharePoint DLLs from Microsoft (link). In my tenant, using Get-Credential requires that I use my company email address and an App Password; this may or may not be the case in your tenant. App Password info: link
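The script itself is PowerShell plus the client DLLs, but if you'd rather stay in Python, something along these lines may work using the same Office365-REST-Python-Client library from the earlier examples. I'm swapping Get-Credential for the client id / secret approach, the library and site names are placeholders, and setting ParserDisabled this way is an assumption; the key is flipping ParserDisabled to true on the library.

from office365.runtime.auth.authentication_context import AuthenticationContext
from office365.sharepoint.client_context import ClientContext

site_url = 'https://YOURtenant.sharepoint.com/sites/somesite/'

# Authenticate with an app client id / secret (same approach as the earlier examples)
context_auth = AuthenticationContext(url=site_url)
context_auth.acquire_token_for_app(client_id='YOUR-CLIENT-ID', client_secret='YOUR-CLIENT-SECRET')
ctx = ClientContext(site_url, context_auth)

# Disable document property promotion/demotion on the library
doc_lib = ctx.web.lists.get_by_title('DocLib')
doc_lib.set_property('ParserDisabled', True)
doc_lib.update()
ctx.execute_query()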
And… after running the script, upload another copy of the Excel file to the library.
The TacoFiller property is no longer being extracted from the Excel file!
As of today, there is not a Logic App trigger for Azure File Storage, so I went with a schedule-based approach. Yes, this example leaves out a lot of fine-tuning, but it will get you headed in the right direction.
Create a blank Logic App
- Trigger: Schedule
- Action: Azure File Storage – List files
- Action: SharePoint – Create file
After you add the SharePoint action, the Logic App should automatically add a For Each action and place the SharePoint Create File action inside of it.
In the last screenshot, I tested the Logic App by uploading a couple of documents in Azure Storage Explorer, then I manually ran the Logic App (click the Run button).
Again, this is a simple example. The example does not account for processing the same files over and over…