http2 – Azure DevOps Server 2020 and http/2

I’ve set up HTTPS on our Azure DevOps Server 2020.0.1 and want IIS to serve the website over HTTP/2. When browsing the website with a Chromium-based browser (Chrome, Edge), all content is served over HTTP/1.1. When browsing with Firefox, static content is served over HTTP/2 and API-generated content over HTTP/1.1.

I want all content to be served over HTTP/2 in all of the mentioned browsers. Is that possible?

The OS is Windows Server 2016.
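For diagnosis, the negotiated protocol can be checked from the command line; the host name below is a placeholder for your server. On Windows Server 2016, IIS 10 speaks HTTP/2 only over TLS, and Chromium-based browsers additionally require ALPN and a cipher suite outside the HTTP/2 blacklist (no CBC-mode suites), which is a common reason Chrome falls back to HTTP/1.1 while Firefox behaves differently. A rough sketch:

```shell
# Ask curl to negotiate HTTP/2 via ALPN and report what was agreed.
# devops.example.com is a placeholder host name.
curl -sI --http2 -o /dev/null -w 'negotiated: HTTP/%{http_version}\n' \
    https://devops.example.com/

# Inspect the ALPN result and the cipher the server picked; a cipher on the
# HTTP/2 blacklist (RFC 7540 Appendix A) makes Chromium refuse h2.
openssl s_client -connect devops.example.com:443 -alpn h2 </dev/null 2>/dev/null \
    | grep -E 'ALPN|Cipher'
```

If the server negotiates a blacklisted cipher first, reordering the cipher suites (e.g. via Group Policy or IISCrypto) so a GCM suite wins is the usual fix.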

How do you make structural changes to Azure SQL Server DBs when you don’t have access via SSMS?

We’re no longer allowed to access the UAT/PROD environments via SSMS at my organization. Our deployment process is tied to git pushes: pushing to the DEV branch updates the DEV web code and database, and likewise when pushing to the QA branch and the UAT branch.

The problem is that when there is a structural change to the DB, the deployment very often fails with the error that data loss may occur. In the past, with on-prem solutions, when we published the DB and hit the data-loss error, we could uncheck “Block incremental deployment if data loss might occur” in Visual Studio and the deployment would work. We never incurred any data loss either. Since this option is no longer available, it was suggested that we use pre- and post-deployment scripts.

What do I need to put in the pre- and post-deployment scripts to prevent the data-loss error? Our Visual Studio database project already contains all the table/view/stored procedure/function definitions.
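If the git-triggered deployment ultimately runs SqlPackage (as Azure DevOps database deployment tasks typically do), the old Visual Studio checkbox corresponds to a publish property that can be passed on the command line. A sketch, with server, database, and file names as placeholders:

```shell
# /p:BlockOnPossibleDataLoss=false is the command-line equivalent of
# unchecking "Block incremental deployment if data loss might occur".
SqlPackage /Action:Publish \
    /SourceFile:MyDb.dacpac \
    /TargetServerName:myserver.database.windows.net \
    /TargetDatabaseName:MyDb \
    /p:BlockOnPossibleDataLoss=false
```

Alternatively, a pre-deployment script can move the affected data into a holding table before the schema change, with a post-deployment script copying it back, so the publish itself never risks data loss.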

authorization – How to mitigate the risk of spoofing/impersonation in the OAuth device flow (device code flow) in Azure AD?

I have developed a C# application and host it as a Windows service on a machine at http://localhost:5000. This application is registered in Azure Active Directory.

The application uses the following details in its configuration:

"ClientId": "242429ea-xxxx-4ddb-xxxx-xxxxxxxxxxxxx",
"Tenant": "67ss7s7s7s-4e27-beee-yyyyyyyyyyyy",
"Scope": "api://12121212-5600-xxxx-1111-123456789/IoTGateway",

The application receives a token from AAD, which is then used by the user for authentication (the OAuth device flow in Azure AD, sometimes called the device code flow).

Question

Currently, all the company’s employees are registered in AD, and a disgruntled employee who copies the application’s configuration values can gain access by spoofing the application. This is a risk. How can it be mitigated?

Note: an attacker can shut down this application and run his own spoofed application on the same port 5000.


Is it possible to create a security group and add only users who are supposed to have access to this application?

Example

AD All Users
User 1
User 2
User 3
AD Sec Group 1
User 1
User 2

So even though user 3 has the secret, his request should be rejected by AAD. Is that possible?
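The group-based restriction described above can be approximated with the “user assignment required” setting on the app’s service principal, which makes AAD refuse tokens for users (like user 3) who are not assigned. A sketch using the Azure CLI, with the object ID as a placeholder:

```shell
# Require assignment on the service principal (the "enterprise application").
# <service-principal-object-id> is a placeholder.
az ad sp update --id <service-principal-object-id> \
    --set appRoleAssignmentRequired=true

# Then assign only the allowed security group in the portal:
# Enterprise applications > (your app) > Users and groups > Add user/group.
```

With this enabled, possession of the ClientId/Tenant/Scope values alone no longer yields a token for unassigned users; AAD rejects the sign-in during the device code flow.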


Can I make a short-lived read-only snapshot for reporting in Azure SQL DB?

I have an Azure SQL DB (OLTP) that is under considerable load; let’s call it AppDB. AppDB is transactionally consistent, but new rows are written to it constantly. Now I need to populate ReportingDB based on the state of AppDB every 30 minutes. The reporting population job runs several moderately big queries against AppDB. Unfortunately, those queries can’t be wrapped in a transaction but still have to all run on consistent data. That is, I can’t have the situation: Query 1 runs => new rows are inserted into AppDB => Query 2 runs. All my queries have to see the data the way it was when Query 1 started.

I was hoping I could use snapshots to create a read-only snapshot for the reporting job to use. From the description, creating such a snapshot should be fast, and the subsequent “copy on write” performance hit should be manageable. The snapshot’s lifetime would be under 10 minutes on average.

But it now looks like Azure SQL does not support CREATE DATABASE ... AS SNAPSHOT OF ...; it only supports CREATE DATABASE ... AS COPY OF ..., which I expect to be a lot slower (and probably not meant to be used for reporting snapshots).

What would be a good way to create quick and short-lived read-only snapshots for reporting in Azure SQL DB?

P.S. We have considered replication, but it is not an option for us at the moment due to policy limitations.
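For reference, the COPY OF alternative mentioned above can be driven from the Azure CLI; resource names below are placeholders. The copy is transactionally consistent as of its completion time, but it is a full copy, not a copy-on-write snapshot, so creation time grows with database size:

```shell
# Create a transactionally consistent copy of AppDB for the reporting run.
az sql db copy \
    --resource-group my-rg \
    --server my-server \
    --name AppDB \
    --dest-name AppDB_ReportingCopy

# Drop the copy once the reporting job has finished.
az sql db delete \
    --resource-group my-rg \
    --server my-server \
    --name AppDB_ReportingCopy \
    --yes
```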

vpn – Does azurevpnconfig (Azure) contain security-relevant information?

I created a new VPN gateway in my Azure environment (OpenVPN SSL with AD authentication).
After that, I downloaded the profile/config file, which contains azurevpnconfig.
With this file, the “Azure VPN Client” program, and my AD credentials, I can connect successfully from my client to this VPN gateway.

Inside the azurevpnconfig there is information such as the tenant ID of my AD and also the serversecret. Is it safe to share this file with other users around the world, or could sharing this information be a security issue?

Increasing App Service Plan storage – Azure

I’m learning Azure and have a question about App Service Plans.
I understand that an App Service Plan provides a certain amount of RAM for each virtual machine instance.
However, regarding storage space, I’m unsure whether each virtual machine instance we create consumes disk space in the plan.
For example, I have an application that consumes 1 GB with one VM instance. If we scale this plan horizontally and create 3 VM instances, will they occupy 3 GB of disk space?
I think of the instances as copies of the plan’s content, so I assume the disk space consumption doubles with each instance. Is that right?

Thank you very much for your answers.

Best regards.

How to create custom variables in an Azure DevOps pipeline to track versions?

I am looking to create a widget on the Azure DevOps dashboard, or a standalone custom application, that will essentially be a table holding specific data. We want to display which code is deployed to which environment, and its version, e.g. v1.0.0.

It would look something like:
ProjectName | QA | v1.0.0

I was thinking of performing a GET request against the REST API, adding the correct values to a table, and pushing the extension to the Azure DevOps dashboard. The only issue is that any change to an extension requires it to be deleted and pushed to the extension marketplace again.

I was also looking at creating custom variables for the version or environment and adding them to the pipeline, which I think would be more ideal.

Is there a way to display pipeline information, such as a custom variable (a version number), on the Azure DevOps dashboard?
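One lightweight approach to the REST idea above: if each pipeline sets its build number to the deployed version (the `name:` field in YAML), the version can be read back per definition without any custom variable plumbing. A sketch, where the organization, project, definition ID, and PAT are placeholders:

```shell
# Fetch the most recent completed build for pipeline definition 42 and
# print "pipeline name | build number". $AZDO_PAT is a personal access
# token with Build (read) scope; \$top is escaped so the shell does not
# expand it.
curl -s -u ":$AZDO_PAT" \
  "https://dev.azure.com/my-org/my-project/_apis/build/builds?definitions=42&\$top=1&statusFilter=completed&api-version=6.0" \
  | jq -r '.value[0] | "\(.definition.name) | \(.buildNumber)"'
```

A dashboard widget (or standalone page) can run this query per project row to build the ProjectName | environment | version table.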

How should I handle Azure SQL hanging when scaling up from General Purpose to Hyperscale?

I’m in the process of scaling up an Azure SQL database from General Purpose to Hyperscale. This has been running for more than 12 hours. When I check “Ongoing operations”, it says “Scaling database performance” with “Progress: 0%”.

I’m not sure whether I should wait for it to complete, or click “Cancel this operation” and try another approach. How should I handle Azure SQL hanging when scaling up from General Purpose to Hyperscale?
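The in-flight operation can also be inspected and, if necessary, cancelled from the Azure CLI rather than the portal; resource names below are placeholders:

```shell
# List management operations currently running against the database.
az sql db op list \
    --resource-group my-rg \
    --server my-server \
    --database my-db \
    --query "[?state=='InProgress']" \
    -o table

# Cancel a stuck operation using the operation name from the listing above.
az sql db op cancel \
    --resource-group my-rg \
    --server my-server \
    --database my-db \
    --name <operation-name>
```

Cancelling leaves the database at its original General Purpose tier, after which the scale-up can be retried (e.g. in a maintenance window with less write activity).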