Copilot Studio – Installing and using ready-to-use agents
Conduct your own lab, download: Contoso Travel Policies
Repository: https://github.com/MariuszFerdyn/hands-on-lab-azure-functions-flex-openai
Step by step:
# Clone the repository
git clone https://github.com/MariuszFerdyn/hands-on-lab-azure-functions-flex-openai
# Log in to Azure
az login
# Display your account details
az account show
# Select your Azure subscription
az account set --subscription <subscription-id>
# Go to the project directory
cd <cloned-project-path>
# Authenticate using azd
azd auth login
# Create resources using the IaC defined in the infra directory
azd provision
# .azure/ignite.env
# Deploy Functions to Azure
azd env set AZURE_LOCATION eastus2 -e ignite2024mf --no-prompt
azd env refresh -e ignite2024mf
azd deploy
# Post a wav file to ST via the function
# Update AudioTranscriptionOrchestration01.cs
azd deploy processor
# Update AudioTranscriptionOrchestration02.cs
azd deploy processor
# Post a wav file to ST via the function
Tools:
Introduction to Copilot for SharePoint – especially Copilot for Documents in a SharePoint library.
The Zero Trust Model workshop described at https://rzetelnekursy.pl/zero-trust-model-audit-for-free has a successor: a new version is available at https://aka.ms/ztworkshop.
It includes:
All VMs in Backup:
# Import necessary modules
# Import-Module Az
# Connect-AzAccount

# Get all Recovery Services Vaults
$recoveryVaults = Get-AzRecoveryServicesVault

# Initialize an array to hold backup items
$backupItems = @()

# Enumerate all Recovery Services Vaults
foreach ($vault in $recoveryVaults) {
    # Set the context to the current vault
    Set-AzRecoveryServicesVaultContext -Vault $vault

    # Get all backup containers in the current vault
    $containers = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM

    # Enumerate all backup containers
    foreach ($container in $containers) {
        # Get all backup items in the current container for the specified workload type
        $items = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType AzureVM

        # Add the backup items to the array, adding vault and container details
        $backupItems += $items | Select-Object @{Name="VaultName";Expression={$vault.Name}}, @{Name="ResourceGroupName";Expression={$vault.ResourceGroupName}}, @{Name="BackupContainerName";Expression={$container.Name}}, *
    }
}

# Display the backup items in Out-GridView
$backupItems | Out-GridView

# Export the backup items to a CSV file
$backupItems | Export-CSV VMSpecSources.csv
All SQL Databases in Backup (including deleted source databases):
# Import necessary modules
# Import-Module Az
# Connect-AzAccount
# Login to the Azure account

# Get all Recovery Services Vaults
$recoveryVaults = Get-AzRecoveryServicesVault

# Initialize an array to hold backup items
$backupItems = @()

# Enumerate all Recovery Services Vaults
foreach ($vault in $recoveryVaults) {
    # Set the context to the current vault
    #Set-AzRecoveryServicesVaultContext -Vault $vault

    # Get all backup containers in the current vault for MSSQL
    $containers = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -VaultId $vault.ID
    #echo "--------- containers --------"
    #echo $containers
    #echo "-----------------------------"

    # Enumerate all backup containers
    foreach ($container in $containers) {
        # Get all backup items in the current container for MSSQL
        Set-AzRecoveryServicesVaultContext -Vault $vault
        #Get-AzRecoveryServicesBackupProtectableItem -ItemType "SQLDataBase"
        $items = Get-AzRecoveryServicesBackupItem -Container $container -WorkloadType MSSQL -VaultId $vault.ID

        Write-Host "--------- items --------"
        Write-Host "vault    :" -NoNewline; Write-Host $vault.Name
        Write-Host "Container:" -NoNewline; Write-Host $container.Name
        Write-Host "Items    :" -NoNewline; Write-Host $items.FriendlyName
        Write-Host "-----------------------------"

        # Add the backup items to the array, adding vault and container details
        $backupItems += $items | Select-Object `
            @{Name="VaultName"; Expression={$vault.Name}}, `
            @{Name="ResourceGroupName"; Expression={$vault.ResourceGroupName}}, `
            @{Name="BackupContainerName"; Expression={$container.Name}}, FriendlyName, ServerName, ParentName, ParentType, LastBackupErrorDetail, ProtectedItemDataSourceId, ProtectedItemHealthStatus, ProtectionStatus, PolicyId, ProtectionState, LastBackupStatus, LastBackupTime, ProtectionPolicyName, ExtendedInfo, DateOfPurge, DeleteState, Name, Id, LatestRecoveryPoint, SourceResourceId, WorkloadType, ContainerName, ContainerType, BackupManagementType
    }
}

# Display the backup items in Out-GridView
$backupItems | Out-GridView

# Export the backup items to a CSV file
$backupItems | Export-CSV -Path "MSSQLBackupItemsAll.csv" -NoTypeInformation
All SQL Databases in Backup (databases that still exist):
# Import necessary modules
# Import-Module Az
# Connect-AzAccount
# Login to the Azure account

# Get all Recovery Services Vaults
$recoveryVaults = Get-AzRecoveryServicesVault

# Initialize an array to hold backup items
$backupItems = @()

# Enumerate all Recovery Services Vaults
foreach ($vault in $recoveryVaults) {
    # Set the context to the current vault
    #Set-AzRecoveryServicesVaultContext -Vault $vault

    # Get all backup containers in the current vault for MSSQL
    $containers = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -VaultId $vault.ID
    #echo "--------- containers --------"
    #echo $containers
    #echo "-----------------------------"

    # Enumerate all backup containers
    foreach ($container in $containers) {
        # Get all protectable SQL databases in the current container
        $items = Get-AzRecoveryServicesBackupProtectableItem -Container $container -WorkloadType MSSQL -ItemType "SQLDataBase" -VaultId $vault.ID
        echo "--------- items --------"
        echo ("vault    :" + $vault.Name)
        echo ("Container:" + $container.Name)
        echo ("Items    :" + $items.FriendlyName)
        echo "-----------------------------"

        # Add the backup items to the array, adding vault and container details
        $backupItems += $items | Select-Object @{Name="VaultName";Expression={$vault.Name}}, @{Name="ResourceGroupName";Expression={$vault.ResourceGroupName}}, @{Name="BackupContainerName";Expression={$container.Name}}, `
            FriendlyName, ProtectionState, ProtectableItemType, ParentName, ParentUniqueName, ServerName, `
            IsAutoProtectable, IsAutoProtected, AutoProtectionPolicy, Subinquireditemcount, Subprotectableitemcount, `
            Prebackupvalidation, NodesList, Name, Id, WorkloadType, ContainerName, ContainerType, BackupManagementType
    }
}

# Display the backup items in Out-GridView
$backupItems | Out-GridView

# Export the backup items to a CSV file
$backupItems | Export-CSV -Path "MSSQLBackupItemsExisting.csv" -NoTypeInformation
If we query for AppID from Log Analytics, like:
MicrosoftGraphActivityLogs
| summarize NumberOfRequests=count() by AppId
| order by NumberOfRequests desc
we usually need to combine it with the Application name.
So we need to export all Enterprise Applications and App Registrations to CSV from the Entra ID portal.
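If you prefer scripting over the portal export, a minimal sketch with the Azure CLI could look like this (the output file names are just examples, and the tab-separated output still needs to be merged into the final CSV):
# App registrations (application objects): display name and AppId
az ad app list --all --query "[].[displayName,appId]" -o tsv > AppRegistrations.tsv
# Enterprise applications (service principals): display name and AppId
az ad sp list --all --query "[].[displayName,appId]" -o tsv > EnterpriseApplications.tsv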
Do not forget about Managed Identities; you can list them with this Azure Resource Graph query:
resources
| where type =~ 'Microsoft.ManagedIdentity/userAssignedIdentities'
| project name, principalId = properties.principalId, clientId = properties.clientId
From all of them, build a file AppIDList.csv like:
ApplicationName | AppID |
"VeeamM365B" | d8dff9d3-367b-4967-8a2a-f2d31c929f5d |
"P2P Server" | 39ed2d41-3e76-4505-ae68-56c02cf713c9 |
We need to upload this file to a storage account so that it is publicly accessible.
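One way to do that, a minimal sketch assuming a storage account named mystorageacct and a container named allapplicationslist (anonymous blob access must also be allowed at the account level):
# Create a container that allows anonymous read access to blobs
az storage container create --account-name mystorageacct --name allapplicationslist --public-access blob --auth-mode login
# Upload the mapping file
az storage blob upload --account-name mystorageacct --container-name allapplicationslist --name AppIDList.csv --file AppIDList.csv --auth-mode login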
And finally, we can make a query that combines AppID with the corresponding name, so we can execute:
let ApplicationInformation = externaldata (ApplicationName: string, AppId: string, Reference: string) [h"https://xxxx.blob.core.windows.net/xxx-allapplicationslist/xxx.csv"] with (ignoreFirstRecord=true, format="csv");
MicrosoftGraphActivityLogs
| summarize NumberOfRequests=count() by AppId
| lookup kind=leftouter ApplicationInformation on $left.AppId == $right.AppId
| order by NumberOfRequests desc
| project AppId, ApplicationName, NumberOfRequests
So finally we get the AppId together with the application name.
Sample event-driven application that uses a Storage Account as input, then triggers an Azure Function, uses Computer Vision, and stores the information in Cosmos DB. All with the help of Event Grid.
Full source code: https://github.com/MariuszFerdyn/Build-and-deploy-serverless-apps-with-Azure-Functions-and-Azure-AI
Complete solution for using Azure Form Recognizer / Document Intelligence Studio.
Source code used in this Lab:
https://github.com/MariuszFerdyn/AzureAI-Document-Intelligence-Studio—Form-Recognizer
One of the common feature requests for Azure DevOps is to have custom messages in emails. It would be a great feature, and currently we have some options, so let's see them one by one.
The code:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      echo "##vso[task.logissue type=error]Hello world!"
      echo "##vso[task.complete result=Failed]"
produces the following report (we see the message), and email (we see the message):
The code:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      echo "##vso[task.logissue type=warning]Hello world!"
      echo "##vso[task.complete result=Succeeded]"
produces the following report (we see the message), and email (we do not see the message):
The code:
- task: Bash@3
  inputs:
    targetType: 'inline'
    script: |
      echo "##vso[task.logissue type=error]01Beginning of a group…Warning message…Error message…Start of a section…Debug text…Command-line being run!"
      echo "##vso[task.logissue type=error]02Beginning of a group…Warning message…Error message…Start of a section…Debug text…Command-line being run!"
      echo "##vso[task.logissue type=error]03Beginning of a group…Warning message…Error message…Start of a section…Debug text…Command-line being run!"
      echo "##vso[task.logissue type=error]04Beginning of a group…Warning message…Error message…Start of a section…Debug text…Command-line being run!"
      echo "##vso[task.logissue type=error]05Beginning of a group…Warning message…Error message…Start of a section…Debug text…Command-line being run!"
      echo "##vso[task.logissue type=error]06Beginning of a group…Warning message…Error message…Start of a section…Debug text…Command-line being run!"
      echo "##vso[task.logissue type=error]07Beginning of a group…Warning message…Error message…Start of a section…Debug text…Command-line being run!"
      echo "##vso[task.complete result=Succeeded]"
produces the following report (we see the message), and email (we see the message):
Learn how you can build your own copilots with Microsoft Copilot Studio. In this workshop you'll learn how copilots can be created for use across the business. You'll also see how you can create custom plug-ins that integrate with custom solutions. We'll then show you how you can use Generative AI for even more intelligent responses.
The source code used in example:
https://github.com/MariuszFerdyn/Build-your-own-Copilots-with-Microsoft-Copilot-Studio
Remediation script:
# Define the new user's username and password
$newUsername = "mfmfmf"
$newPassword = ConvertTo-SecureString "xxxx" -AsPlainText -Force

# Create the new local user
New-LocalUser -Name $newUsername -Password $newPassword -FullName "New User" -Description "This is a new user account."

# Optionally, add the user to a group (e.g., Administrators)
Add-LocalGroupMember -Group "Administrators" -Member $newUsername

# Output a success message
Write-Output "User $newUsername has been created successfully."
On the machine where the trust relationship is broken, log in using the last cached credentials, but with the network disconnected; this way it should still be possible. We save the credentials to a file to avoid storing AD credentials in Intune.
$adminUsername="xxxx\adjoinuser"
$adminPassword="xxx"
#$cred = New-Object PSCredential $adminUsername, ($adminPassword | ConvertTo-SecureString -AsPlainText -Force)
New-Item -ItemType Directory c:\aaaa
Get-Variable admin* | Export-Clixml c:\aaaa\vars.xml
#Import-Clixml c:\aaaa\vars.xml | %{ Set-Variable $_.Name $_.Value }
exit 1
Import-Clixml c:\aaaa\vars.xml | %{ Set-Variable $_.Name $_.Value }
#$adminUsername
#$adminPassword
$cred = New-Object PSCredential $adminUsername, ($adminPassword | ConvertTo-SecureString -AsPlainText -Force)
Test-ComputerSecureChannel -Repair -Credential $cred
You can also do it all in one Intune script; the password will then be stored in Intune, but you do not need any interactive access to the affected machine:
$adminUsername="xxxx\adjoinuser"
$adminPassword="xxx"
$cred = New-Object PSCredential $adminUsername, ($adminPassword | ConvertTo-SecureString -AsPlainText -Force)
Test-ComputerSecureChannel -Repair -Credential $cred
As you have probably noticed, there is no GUI to export Variable Groups, but there is a very nice REST API that can be called directly from your browser; a sketch of both calls is shown below.
Calling the list endpoint displays all the Variable Groups in Azure DevOps.
Calling the same endpoint with a group ID displays a single Variable Group, which you can then save to export it.
More info in the Azure DevOps REST API reference for Variable Groups.
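A minimal sketch of the two calls, assuming an organization named myorg and a project named myproject (the api-version shown may differ in your environment):
List all Variable Groups in a project:
https://dev.azure.com/myorg/myproject/_apis/distributedtask/variablegroups?api-version=7.1-preview.2
Display a single Variable Group (here with ID 1) so it can be saved/exported as JSON:
https://dev.azure.com/myorg/myproject/_apis/distributedtask/variablegroups/1?api-version=7.1-preview.2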
VNet Flow Logs are the successor of NSG Flow Logs; they work not in the NSG context but at the VNet level, which gives us a better view. If you send the consolidated logs to a Log Analytics Workspace, there are some additional advantages:
NSG Flow Logs go to the AzureNetworkAnalytics_CL table, which cannot be exported, so it cannot be part of an Event Hub solution.
VNet Flow Logs go to the NTANetAnalytics table, and this table can be exported to an Event Hub solution.
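A quick sanity check in Log Analytics, as a minimal sketch (only the table name comes from the text above; verify the schema in your own workspace):
NTANetAnalytics
| where TimeGenerated > ago(1h)
| take 20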
We’re building a music recommendation service where users will be able to search and select from a set of songs, and the system will recommend similar songs to them. Below is a depiction of the architecture:
The application is composed of four different components:
The overall intention of this application is for the user to learn about vector databases. Hence the process of deploying this application is broken up into two parts.
In part one we play the role of a data scientist or ML engineer. We will familiarize ourselves with the process of generating embeddings for our song data. This part completes when we’ve stored our embeddings in our vector database.
In part two we play the role of an application engineer and turn the stored embeddings data into a recommendation service by adding an API and a frontend.
Step by Step Deployment:
az login

az provider register -n Microsoft.OperationalInsights --wait &&
az provider register -n Microsoft.ServiceLinker --wait &&
az provider register -n Microsoft.App --wait

export LOCATION=westus2
export RG=music-rec-service
export ACA_ENV=music-env
export NOTEBOOK_IMAGE=mafamafa/aca-music-recommendation-notebook
export BACKEND_IMAGE=mafamafa/aca-music-recommendation-backend
export FRONTEND_IMAGE=mafamafa/aca-music-recommendation-frontend

# create the resource group
az group create -l $LOCATION --name $RG

# create the Container Apps environment
az containerapp env create --name $ACA_ENV --resource-group $RG --location $LOCATION --enable-workload-profiles

## Create the vector db add-on
az containerapp add-on qdrant create --environment $ACA_ENV --resource-group $RG --name qdrant

# add a workload profile for the large Jupyter image
az containerapp env workload-profile add --name $ACA_ENV --resource-group $RG --workload-profile-type D8 --workload-profile-name bigProfile --min-nodes 1 --max-nodes 1

# launch the Jupyter notebook application
az containerapp create --name music-jupyter --resource-group $RG --environment $ACA_ENV --image $NOTEBOOK_IMAGE --cpu 4 --memory 16.0Gi --workload-profile-name bigProfile --min-replicas 1 --max-replicas 1 --target-port 8888 --ingress external --bind qdrant

# grab the Jupyter access token from the logs
az containerapp logs show -g $RG -n music-jupyter | grep token

#### Open the music-jupyter URL from the portal and paste the token
#### Run Start.ipynb
#### Run Import.ipynb

# launch the backend application
az containerapp create --name music-backend --resource-group $RG --environment $ACA_ENV --image $BACKEND_IMAGE --cpu 4 --memory 8.0Gi --workload-profile-name bigProfile --min-replicas 1 --max-replicas 1 --target-port 8000 --ingress external --bind qdrant

#### http://<YOUR_ACA_ASSIGNED_DOMAIN>/songs

# launch the frontend application
az containerapp create --name music-frontend --resource-group $RG --environment $ACA_ENV --image $FRONTEND_IMAGE --cpu 2 --memory 4.0Gi --min-replicas 1 --max-replicas 1 --ingress external --target-port 8080 --env-vars UI_BACKEND=https://music-backend.<YOUR_UNIQUE_ID>.westus2.azurecontainerapps.io

GPU:
# create the environment first
az containerapp env create --name $ACA_ENV --resource-group $RG --location $LOCATION --enable-workload-profiles --enable-dedicated-gpu

az containerapp create --name music-jupyter --resource-group $RG --environment $ACA_ENV --image mafamafa/aca-music-recommendation-notebook:gpu --cpu 24 --memory 48.0Gi --workload-profile-name gpu --min-replicas 1 --max-replicas 1 --target-port 8888 --ingress external --bind qdrant
Complete Microsoft Build 2024 Book of News is here.
According to the document: Here is a list of features not connected with Copilot or AI:
According to the document: Here is a list of AI and Copilot features:
Both lists were generated by Copilot… so Copilot/AI is everywhere!
Mark Russinovich was the founder of Sysinternals, the company behind tools like PsExec, Sysmon, and other tools that companies bought and used for debugging Windows. Around the year 2000 almost every enterprise used them. Nowadays he is Microsoft Azure's CTO (Chief Technology Officer).
Today, Mark Russinovich's top-of-mind projects are:
Windows Server 2025 – What's New in Active Directory (and more):
See all of them on YouTube (let's use Copilot to do a recap):
Windows Server 2025 – try it now:
https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver
It was a total surprise for me… without any warning… just info from: https://learn.microsoft.com/en-us/microsoft-365/security/defender-endpoint/linux-whatsnew?view=o365-worldwide
July-2023 Build: 101.23062.0010 | Release version: 30.123062.0010.0
Available in Defender for Endpoint version 101.10.72 or higher. Default is changed from real_time to passive for Endpoint version 101.23062.0001 or higher.
An interesting "fix and improvement"… it means that eventually the attacker will not be blocked… So if you deployed MDE after July, please check your settings. The good question is how… Microsoft talks about Ansible, Puppet, and Chef for managing Defender; a quick check directly on a machine is sketched below.
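A minimal sketch of checking and re-enabling real-time protection on a Linux machine with the mdatp CLI (note that in managed environments the enforcementLevel set in /etc/opt/microsoft/mdatp/managed/mdatp_managed.json takes precedence, so verify that file as well):
# Check whether real-time protection is currently enabled
mdatp health --field real_time_protection_enabled
# Switch Defender back from passive mode to real-time protection
mdatp config real-time-protection --value enabled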
The other option can be Azure Run Command (via Azure DevOps / Azure Automation / Azure Functions):
$vm = "$(VM)"
Write-Host $vm
Invoke-AzVmRunCommand -ResourceGroupName "$(ResourceGroupName)" -VMName $vm -CommandId "RunPowerShellScript" -ScriptPath "$(System.DefaultWorkingDirectory)\_project\scripts\xxx.ps1"
Or, looking forward, manage it at scale by using Azure Policy Guest Configuration.
https://cloudbrothers.info/en/azure-persistence-azure-policy-guest-configuration/
Unfortunately, I was not able to find a ready-to-use policy, so we need to write our own (How to install the machine configuration authoring module – Azure Automanage | Microsoft Learn), which may not be so easy, but stay tuned…
Create Google Organization:
The GCP account must be part of an organization – not "No Organization". You can create a new one, but you must own an internet domain.
You can follow:
https://workspace.google.com/gcpidentity/signup?sku=identitybasic
If you have an organization you can move the existing account to it: https://cloud.google.com/identity/docs/set-up-cloud-identity-admin#migrate-projects-and-billing-accounts-and-set-permissions
Activate Terraform API:
Just visit in GCP and activate Cloud Resource Manager API:
https://console.cloud.google.com/apis/library/cloudresourcemanager.googleapis.com
Install Terraform:
curl https://apt.releases.hashicorp.com/gpg | gpg --dearmor > hashicorp.gpg
sudo install -o root -g root -m 644 hashicorp.gpg /etc/apt/trusted.gpg.d/
sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt install terraform
terraform --version
Prepare GCP to send data to Azure Sentinel:
mkdir pubsub
cd pubsub/
wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPInitialAuthenticationSetup/GCPInitialAuthenticationSetup.tf
export GOOGLE_PROJECT=angelic-hold-403608
gcloud auth application-default login
gcloud config set project angelic-hold-403608
export GOOGLE_APPLICATION_CREDENTIALS=/root/cred.json
where cred.json is the JSON credentials file downloaded from the service account: https://console.cloud.google.com/iam-admin/serviceaccounts
terraform init
terraform apply
In case of error:
Error: Error creating WorkloadIdentityPoolProvider: googleapi: Error 404: Requested entity was not found.
Just retry the terraform apply; the error occurs because the Google API is activated on first use.
Create App registration in Azure:
Log in to Azure, browse to Identity > Applications > App registrations, then select New registration.
Only the redirect URI needs to be composed accordingly:
The 50… value comes from the Terraform output (Identity_federation_pool_id).
Create the pub-sub resources:
cd ..
mkdir pubsub2
cd pubsub2
wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/GCP/Terraform/sentinel_resources_creation/GCPAuditLogsSetup/GCPAuditLogsSetup.tf
terraform init
terraform apply
or, for the entire GCP organization:
terraform apply -var="organization-id={organizationId}"
Create Sentinel Connector:
Open the Sentinel instance you want to send the logs to
Open Content Hub
Select Data connectors
Install the following:
Make sure that the Data connectors subpage displays "GCP Pub/Sub Audit Logs data connector ingested from Sentinel's connector":
Configure the connector
Open Data connectors, click Refresh, select GCP Pub/Sub Audit Logs, and open the connector page.
Click Add new connector and provide the details from the Terraform outputs.
e.g.:
After configuration it should be like:
Test the connection.
After about an hour, open Log Analytics and issue this KQL query:
GCPAuditLogs
You should see some logs from your GCP projects, like:
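To break the ingested events down a little, here is a minimal sketch (the GCPAuditLogs column names used here are assumptions; check the table schema in your workspace):
GCPAuditLogs
| where TimeGenerated > ago(24h)
| summarize Count = count() by ServiceName, MethodName
| order by Count desc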
More info:
https://learn.microsoft.com/en-us/azure/sentinel/connect-google-cloud-platform
https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#azure
https://registry.terraform.io/providers/hashicorp/google/latest/docs/guides/getting_started
If you want to build your solution you should follow this manual for creating Codeless GCP Connectors: https://learn.microsoft.com/en-us/azure/sentinel/create-codeless-connector
The source code for the connector: https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/Google%20Cloud%20Platform%20Audit%20Logs
The solution is based on a Managed App – so be familiar with: https://rzetelnekursy.pl/azure-managed-application/
Sample Output from Terraform (it should look like):
Plan: 7 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ An_output_message = "Please copy the following values to Sentinel"
+ GCP_project_id = "angelic-hold-403608"
+ GCP_project_number = "310859431933"
+ Identity_federation_pool_id = "50ea0418683d400787fbc13c8f6b5d0b"
+ Identity_federation_provider_id = "sentinel-identity-provider"
+ Service_account_email = "sentinel-service-account@angelic-hold-403608.iam.gserviceaccount.com"
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Official documentation says:
Azure Reservations help you save money with 1- or 3-year plans for many products. The commitment allows you to get a discount on the resources you use. Reservations can significantly reduce resource costs, by up to 72% compared to pay-as-you-go pricing. Reservations provide a billing discount and do not impact the runtime state of your resources. The reservation discount is automatically applied to matching resources once the reservation is purchased.
You can pay for your reservation up front or in monthly installments. The total reservation cost is the same for up-front and monthly payments, and there are no additional charges for choosing monthly payments.
In short, we are saying that we need a given resource, e.g. a virtual machine, for 3 years and we commit to keep it for 3 years, or rather we commit to pay for it for 3 years, in exchange for which we receive a discount.
At this point, there are different strategies for selecting a reservation, the two most important ones are:
What will happen if we choose the wrong reservation? Or if we no longer need it? That depends on your cloud provider… If you pay millions for subscriptions, there will be no problem with any provider. But if your bill is up to EUR 20,000, this may be a problem.
In this case, Microsoft Azure rose to the challenge (a monthly bill of up to EUR 2,000 for subscriptions), and after we reported our intention to resign, we received the following e-mail:
Which unfortunately did not work out with another well-known cloud service provider. As the case is ongoing, I will not say which one…
In this lab, you will get hands-on experience and learn how you can get started building plug-ins for Microsoft 365 Copilot.
Exercise 1 – Download Source Code and Install and set up Teams Toolkit for Visual Studio Code
Exercise 2 – Run sample app
Exercise 3 – Run the app in Microsoft Copilot for Microsoft 365
Follow the source code.
In this lab, you learn how to automate testing for cloud-native applications using our sample app Contoso Traders.
Exercise 1 – UI tests with Playwright – Great tool
Exercise 2 – Azure Load Testing
Exercise 3 – Azure Chaos Studio
Follow the source code.
In this lab, you see how to incorporate Kubernetes, Dapr, KEDA, and Bicep into a sample app, using GitHub Copilot.
Exercise 1 – Explore GitHub Codespaces
Exercise 2 – Run PetSpotR in a GitHub Codespace
Exercise 3 – Use GitHub Copilot to add Dapr to the frontend
Exercise 4 – Use Bicep to model your infrastructure as code
Follow the source code.
In this lab, you see the steps on the right and their executions. There are no comments, so you must get the knowledge from external sources, e.g. the step-by-step guide on GitHub.
You can see how to protect and classify your sensitive data in Microsoft 365.
Exercise 1 – Manage Compliance Roles
Exercise 2 – Manage Sensitive Information Types
Exercise 3 – Manage Sensitivity Labels
Exercise 4 – Manage DLP Policies
Exercise 5 – Configure Insider Risk Management
Follow: https://github.com/MicrosoftLearning/SC-400T00A-Microsoft-Information-Protection-Administrator/tree/master/Instructions/Demos/Ignite%202023
Azure Active Directory B2C can be a great solution as an Identity Provider. Here is a quick PowerShell script that exports all users from Azure Active Directory B2C.
#https://github.com/cljung/AzureAD-B2C-scripts
$ApplicationID = "xxx"
$TenantDomainName = "xxx"
$AccessSecret = "xxx"

$Body = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    client_Id     = $ApplicationID
    Client_Secret = $AccessSecret
}

$ConnectGraph = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$TenantDomainName/oauth2/v2.0/token" -Method POST -Body $Body
$token = $ConnectGraph.access_token

$GraphUrl = 'https://graph.microsoft.com/v1.0/users/?$select=id,displayName,mail,otherMails,EmailAddresses'
$GraphUrl = 'https://graph.microsoft.com/v1.0/applications/fea0ec14f6364d3790b1c72b82bd0a00/extensionProperties'
$GraphUrl = 'https://graph.microsoft.com/v1.0/applications'

Write-Host '-----------------------------------------------------------------------------------------------------------------------------------------------'
Write-Host '----------------------------------------------------- B2C App Id -----------------------------------------------------------------------------'
Write-Host '-----------------------------------------------------------------------------------------------------------------------------------------------'
Write-Host ' -- AppName -- '
(Invoke-RestMethod -Headers @{Authorization = "Bearer $($token)"} -Uri $GraphUrl -Method Get).value.displayName + " -- appID -- " + (Invoke-RestMethod -Headers @{Authorization = "Bearer $($token)"} -Uri $GraphUrl -Method Get).value.appId + " -- Id -- " + (Invoke-RestMethod -Headers @{Authorization = "Bearer $($token)"} -Uri $GraphUrl -Method Get).value.Id
Write-Host '-----------------------------------------------------------------------------------------------------------------------------------------------'
Write-Host '------------------------------ The Custom fields taken from b2c-extensions-app Id (last outputs) not appId -----------------------------------'
Write-Host '-----------------------------------------------------------------------------------------------------------------------------------------------'
$GraphUrl = 'https://graph.microsoft.com/v1.0/applications/2834a576-f992-44ab-b5f5-31703ba491f1/extensionProperties'
(Invoke-RestMethod -Headers @{Authorization = "Bearer $($token)"} -Uri $GraphUrl -Method Get).value
Write-Host '-----------------------------------------------------------------------------------------------------------------------------------------------'
Write-Host '-------------------------------------------------- All Users ---------------------------------------------------------------------------------'
Write-Host '-----------------------------------------------------------------------------------------------------------------------------------------------'
$GraphUrl = 'https://graph.microsoft.com/v1.0/users/?$select=identities,displayName,mail,otherMails,id,userType,creationType,accountEnabled,createdDateTime,creationType,lastPasswordChangeDateTime,mailNickname,refreshTokensValidFromDateTime,signInSessionsValidFromDateTime,displayName,extension_a0a23b3b4e404f2ba6e711d151e13811_Level1PlayerWeight,extension_a0a23b3b4e404f2ba6e711d151e13811_Level1PlayerHeight,extension_a0a23b3b4e404f2ba6e711d151e13811_Level1PlayerSchool'
(Invoke-RestMethod -Headers @{Authorization = "Bearer $($token)"} -Uri $GraphUrl -Method Get).value | Format-List
A better view is available on GitHub.
You can also create an Excel workbook with Microsoft Graph queries to display them, as shown here.
The next step is a Power BI report with all users.
Some Microsoft docs.
The https://jwt.ms page can be very helpful for debugging.
When you create a new Ubuntu VM instance and try to connect via the web browser, you can see:
SSH authentication has failed
In logs you can see:
google_guest_agent[734]: Creating user admin.
google_guest_agent[734]: ERROR non_windows_accounts.go:144 Error creating user: useradd: group admin exists – if you want to add this user to that group, use -g..
The solution is to just add an Automation (startup) script with the following:
#! /bin/bash
useradd -m -G sudo mf
echo 'mf:Pa##w0rd' | chpasswd
sed -i “/^[^#]*PasswordAuthentication[[:space:]]no/c\PasswordAuthentication yes” /etc/ssh/sshd_config
service sshd restart
Set-AzVirtualNetworkGatewayDefaultSite -GatewayDefaultSite $LocalGateway -VirtualNetworkGateway $VirtualGateway
This sets forced tunneling from Azure to on-premises. But after some time you may lose connectivity entirely. The solution can be to simply reset the virtual network gateway twice, even if you see a traffic selectors mismatch (a sketch follows the link below).
More info: https://learn.microsoft.com/en-us/azure/vpn-gateway/site-to-site-tunneling
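A minimal PowerShell sketch of the whole sequence, assuming a gateway named vnet-gw and a local network gateway named onprem-gw in resource group rg-network (Reset-AzVirtualNetworkGateway reboots the gateway instances, so expect a short outage):
# Get the virtual network gateway and the local network gateway
$VirtualGateway = Get-AzVirtualNetworkGateway -Name "vnet-gw" -ResourceGroupName "rg-network"
$LocalGateway   = Get-AzLocalNetworkGateway -Name "onprem-gw" -ResourceGroupName "rg-network"

# Enable forced tunneling (default site) towards on-premises
Set-AzVirtualNetworkGatewayDefaultSite -GatewayDefaultSite $LocalGateway -VirtualNetworkGateway $VirtualGateway

# If connectivity drops afterwards, reset the gateway twice
Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $VirtualGateway
Reset-AzVirtualNetworkGateway -VirtualNetworkGateway $VirtualGateway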
After you configure your first ADFS application like here:
You can do it as a multi-tenant application using this scenario:
Step by step described here:
https://blog.matrixpost.net/creating-an-ad-fs-federation-trust-between-two-organizations/