
Check installed version for ASP.NET Core on Windows IIS with PowerShell


The problem

Let’s say you have an ASP.NET Core application published without the bundled ASP.NET Core runtime (e.g. to keep the download as small as possible) and you want to run it on a Windows Server hosted by IIS.

General approach

The general approach is the following: Install the .NET Core hosting bundle and you are done.

Each .NET Core runtime (and there are quite a bunch of them) is backward compatible (at least within the 2.X line), so if you have installed 2.2.6, your app (built against runtime 2.2.1) still runs.
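As a quick check on machines where the dotnet host is available, you can also list all installed runtimes from the command line (the registry based check below works even without the CLI):

dotnet --list-runtimes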

Why check the minimum version?

Well… in theory the app itself (at least for .NET Core 2.X applications) may run under any newer 2.X runtime, but each patch release fixes bugs and security issues, so to keep things safe it is a good idea to enforce a minimum version and with it the security updates.

Check for minimum requirement

I stumbled upon this Stackoverflow question/answer and enhanced the script, because that version only tells you “ASP.NET Core seems to be installed”. My enhanced version searches for a minimum required version and exits the script if it is not found.

# Minimum required version - adjust this to your app's needs
$DotNetCoreMinimumRuntimeVersion = [System.Version]::Parse("2.2.5.0")

$DotNETCoreUpdatesPath = "Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Updates\.NET Core"
$DotNetCoreItems = Get-Item -ErrorAction Stop -Path $DotNETCoreUpdatesPath
$MinimumDotNetCoreRuntimeInstalled = $False

# Each installed hosting bundle shows up as a subkey with a "PackageVersion" value
$DotNetCoreItems.GetSubKeyNames() | Where-Object { $_ -Match "Microsoft .NET Core.*Windows Server Hosting" } | ForEach-Object {
    $registryKeyPath = Get-Item -Path "$DotNETCoreUpdatesPath\$_"
    $dotNetCoreRuntimeVersion = $registryKeyPath.GetValue("PackageVersion")
    $dotNetCoreRuntimeVersionCompare = [System.Version]::Parse($dotNetCoreRuntimeVersion)

    if ($dotNetCoreRuntimeVersionCompare -ge $DotNetCoreMinimumRuntimeVersion) {
        Write-Host "The host has installed the following .NET Core Runtime: $_ (MinimumVersion requirement: $DotNetCoreMinimumRuntimeVersion)"
        $MinimumDotNetCoreRuntimeInstalled = $True
    }
}

if ($MinimumDotNetCoreRuntimeInstalled -eq $False) {
    Write-Host ".NET Core Runtime (MinimumVersion $DotNetCoreMinimumRuntimeVersion) is required." -ForegroundColor Red
    exit 1
}

The “most” interesting part is the first line, where we set the minimum required version.

If you have installed a version of the .NET Core runtime on Windows, this information will end up in the registry under the path used in the script above.

Now we just need to compare the installed version with the required version and we know if we are good to go.

Hope this helps!


Enforce Administrator mode for compiled .NET exe applications


The problem

Let’s say you have a .exe application built with Visual Studio and the application always needs to be run from an administrator account. Windows Vista introduced “User Account Control” (UAC) and such applications are marked with a special “shield” icon.

TL;DR-version:

To build such an .exe you just need to add an application manifest file and request the needed permission like this:

<requestedExecutionLevel  level="requireAdministrator" uiAccess="false" />

Step by Step for .NET Framework apps

Create your WPF, WinForms or Console project and add an application manifest file.

The file itself has quite a bunch of comments in it and you just need to replace

<requestedExecutionLevel level="asInvoker" uiAccess="false" />

with

<requestedExecutionLevel  level="requireAdministrator" uiAccess="false" />

… and you are done.
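If you want to double check at runtime that the process really runs elevated (a sketch, not from the original post - the helper name is illustrative), you can ask the WindowsPrincipal:

using System;
using System.Security.Principal;

// Returns true if the current process runs with administrative rights.
static bool IsElevated()
{
    using (WindowsIdentity identity = WindowsIdentity.GetCurrent())
    {
        var principal = new WindowsPrincipal(identity);
        return principal.IsInRole(WindowsBuiltInRole.Administrator);
    }
}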

Step by Step for .NET Core apps

The same approach works more or less for .NET Core 3 apps:

Add an “application manifest file”, change the requestedExecutionLevel and it should “work”.

Be aware: For some unknown reason the default name for the application manifest file will be “app1.manifest”. If you rename the file to “app.manifest”, make sure your .csproj is updated as well:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <ApplicationManifest>app.manifest</ApplicationManifest>
  </PropertyGroup>
</Project>

Hope this helps!

View the source code on GitHub.

IdentityServer & Azure AD Login: Unknown Response Type text/html


The problem

Last week we had some problems with our Microsoft Graph / Azure AD login based system. From a user perspective everything was fine until the redirect from the Microsoft account back to our IdentityServer.

As STS and for all auth related stuff we use the excellent IdentityServer4.

We used the following configuration:

services.AddAuthentication()
            .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
            {
                options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                options.ClientSecret = office365Config.MicrosoftAppClientSecret;    // Client-Secret from the AppRegistration 
                options.Authority = office365Config.AuthorizationEndpoint;          // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                options.ResponseType = "code id_token";
                options.GetClaimsFromUserInfoEndpoint = true;
                options.SaveTokens = true;
                options.CallbackPath = "/oidc-signin"; 
                foreach (var scope in office365Scopes)
                {
                    options.Scope.Add(scope);
                }
            });

The “office365config” contains the basic OpenId Connect configuration entries like ClientId and ClientSecret and the needed scopes.

Unfortunately with this configuration we couldn’t log in to our system, because after we successfully signed in to the Microsoft account this error occurred:

System.Exception: An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
   --- End of inner exception stack trace ---
   at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
   at IdentityServer4.Hosting.FederatedSignOut.AuthenticationRequestHandlerWrapper.HandleRequestAsync() in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\FederatedSignOut\AuthenticationRequestHandlerWrapper.cs:line 38
   at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
   at Microsoft.AspNetCore.Cors.Infrastructure.CorsMiddleware.InvokeCore(HttpContext context)
   at IdentityServer4.Hosting.BaseUrlMiddleware.Invoke(HttpContext context) in C:\local\identity\server4\IdentityServer4\src\IdentityServer4\src\Hosting\BaseUrlMiddleware.cs:line 36
   at Microsoft.AspNetCore.Server.IIS.Core.IISHttpContextOfT`1.ProcessRequestAsync()

Fix

After some code research I found the problematic setting:

We just needed to disable “GetClaimsFromUserInfoEndpoint” and everything worked. I’m not sure why the error occurred, because this code was more or less untouched for a couple of months and worked as intended. I’m not even sure what “GetClaimsFromUserInfoEndpoint” really does in combination with a Microsoft account.

I wasted one or two hours with this behavior and maybe this will help someone in the future. If someone knows why this happened: Use the comment section or write me an email :)

Full code:

   services.AddAuthentication()
                .AddOpenIdConnect(office365Config.Id, office365Config.Caption, options =>
                {
                    options.SignInScheme = IdentityServerConstants.ExternalCookieAuthenticationScheme;
                    options.SignOutScheme = IdentityServerConstants.SignoutScheme;
                    options.ClientId = office365Config.MicrosoftAppClientId;            // Client-Id from the AppRegistration 
                    options.ClientSecret = office365Config.MicrosoftAppClientSecret;  // Client-Secret from the AppRegistration 
                    options.Authority = office365Config.AuthorizationEndpoint;        // Common Auth Login https://login.microsoftonline.com/common/v2.0/ URL is preferred
                    options.TokenValidationParameters = new TokenValidationParameters { ValidateIssuer = false }; // Needs to be set in case of the Common Auth Login URL
                    options.ResponseType = "code id_token";
                    // Don't enable the UserInfoEndpoint, otherwise this may happen
                    // An error was encountered while handling the remote login. ---> System.Exception: Unknown response type: text/html
                    // at Microsoft.AspNetCore.Authentication.RemoteAuthenticationHandler`1.HandleRequestAsync()
                    options.GetClaimsFromUserInfoEndpoint = false; 
                    options.SaveTokens = true;
                    options.CallbackPath = "/oidc-signin"; 
                    foreach (var scope in office365Scopes)
                    {
                        options.Scope.Add(scope);
                    }
                });

Hope this helps!

Did you know that you can build .NET Core apps with MSBuild.exe?


The problem

We recently updated a bunch of our applications to .NET Core 3.0. Because of the compatibility changes to the “old framework” we try to move more and more projects to .NET Core, but some projects still target “.NET Framework 4.7.2” - they should work “ok-ish” when used from .NET Core 3.0 applications.

The first tests were quite successful, but unfortunately when we tried to build and publish the updated .NET Core 3.0 app via ‘dotnet publish’ (with a reference to a .NET Framework 4.7.2 app) we faced this error:

C:\Program Files\dotnet\sdk\3.0.100\Microsoft.Common.CurrentVersion.targets(3639,5): error MSB4062: The "Microsoft.Build.Tasks.AL" task could not be loaded from the assembly Microsoft.Build.Tasks.Core, Version=15.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a.  Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask. 

The root cause

After some experiments we saw a pattern:

Each .NET Framework 4.7.2 based project with a ‘.resx’ file would result in the above error.

The solution

‘.resx’ files are still a valid thing to use, so we checked whether we could work around this problem, but unfortunately this was not very successful. We moved some resources, but in the end some resources had to stay in the corresponding file.

We used the ‘dotnet publish…’ command to build and publish .NET Core based applications, but then I tried to build the .NET Core application with MSBuild.exe and discovered that this worked.

Lessons learned

If you have a mixed environment with “old” .NET Framework based applications with resources in use and want to use this in combination with .NET Core: Try to use the “old school” MSBuild.exe way.

MSBuild.exe is capable of building .NET Core applications and the result is more or less the same.
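For reference, a publish via MSBuild.exe looks roughly like this (a sketch - the solution name and output folder are placeholders; works for SDK-style projects):

msbuild.exe MySolution.sln /restore /t:Publish /p:Configuration=Release /p:PublishDir=C:\publish\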

Be aware

Regarding ASP.NET Core applications: The ‘dotnet publish’ command will create a web.config file - if you use the MSBuild approach this file will not be created automatically. I’m not sure if there is a hidden switch, but if you just treat .NET Core apps like .NET Framework console applications the web.config file is not generated. This might lead to some problems when you deploy to IIS.

Hope this helps!

T-SQL Pagination


The problem

This is pretty trivial: Let’s say you have a blog with 1000 posts in your database, but you only want to show 10 entries “per page”. You need to find a way to slice this dataset into smaller pieces.

The solution

In theory you could load everything from the database and filter the results “in memory”, but this would be quite stupid for many reasons (e.g. you load much more data than you need and the computing resources could be used for other requests etc.).

If you use plain T-SQL (and Microsoft SQL Server 2012 or higher) you can express a query with paging like this:

SELECT * FROM TableName ORDER BY id OFFSET 0 ROWS FETCH NEXT 10 ROWS ONLY;

Read it like this: Return the first 10 entries from the table. To get the next 10 entries use OFFSET 10 and so on.

If you use Entity Framework (or Entity Framework Core or any other O/R mapper), chances are high that it does exactly the same thing internally for you.
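For example, with Entity Framework Core the second page would look roughly like this (a sketch - BlogContext and Posts are illustrative names; Skip/Take translate to the OFFSET/FETCH syntax shown above):

using (var context = new BlogContext())
{
    // Page 2 with a page size of 10 - translates to OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY
    var page = context.Posts
        .OrderBy(p => p.Id)   // a stable ORDER BY is required for predictable paging
        .Skip(10)
        .Take(10)
        .ToList();
}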

All currently “supported” SQL Server versions support this syntax. If you try it on SQL Server 2008 or SQL Server 2008 R2 you will receive a SQL error.

Links

Checkout the documentation for further information.

This topic might seem “simple”, but during my developer life I was surprised how “hard” paging was with SQL Server. Some 10 years ago (… I’m getting old!) I was using MySQL, and on Microsoft SQL Server the OFFSET and FETCH syntax was only introduced with SQL Server 2012. This Stackoverflow.com question shows the different ways to implement it. The “older” ways are quite weird and complicated.

I also recommend this blog for everyone who needs to write T-SQL.

Hope this helps!

Accessibility Insights: Spot accessibility issues easily for Web Apps and Windows Apps


Accessibility

Accessibility is a huge and important topic nowadays. Keep in mind that in some sectors (e.g. government, public service etc.) accessibility is a requirement by law (in Europe the European Standards EN 301 549).

If you want to learn more about accessibility in general this might be handy: MDN Web Docs: What is accessibility?

Tooling support

In my day to day job for OneOffixx I was looking for a good tool to spot accessibility issues in our Windows and Web App. I knew that there must be some good tools for web development, but was not sure about Windows app support.

Accessibility itself has many aspects, but these were some non-obvious key aspects in our application that we needed to address:

  • Good contrasts: This one is easy to understand, but sometimes some colors or hints in the software didn’t match the required contrast ratios. High contrast modes are even harder.
  • Keyboard navigation: This one is also easy to understand, but can be really hard. Some elements are nice to look at, but hard to focus with pure keyboard commands.
  • Screen reader: After your application can be navigated with the keyboard you can checkout screen reader support.

Accessibility Insights

Then I found this app from Microsoft: Accessibility Insights


The tool scans active applications for accessibility issues. Side note: The UX is a bit strange, but OK - you get used to it.

Live inspect:

The starting point is to select a window or a visible element on the screen and Accessibility Insights will highlight it.

Then you can click on “Test”, which gives you a detailed test result.

(I’m not 100% sure if each error is really problematic, because a lot of Microsoft’s very own applications have many issues here.)

Tab Stops:

As already written: Keyboard navigation is a key aspect. This tool has a nice way to visualize “Tab” navigation and might help you better understand the navigation with a keyboard.

Contrasts:

The third nice helper in Accessibility Insights is the contrast checker. It highlights contrast issues and has an easy to use color picker integrated.


Behind the scenes this tool uses the Windows Automation API / Windows UI Automation API.

Accessibility Insights for Chrome

Accessibility Insights can be used in Chrome (or Edge) as well to check web apps. The extension is similar to the Windows counterpart, but has a much better “assessment” story.

Summary

This tool was really a time saver. The UX might not be the best on Windows, but it gives you some good hints. After we discovered this app for our Windows Application we used the Chrome version for our Web Application as well.

If you use or have used other tools in the past: Please let me know. I’m pretty sure there are some good apps out there to help build better applications.

Hope this helps!

TLS/SSL problem: 'Could not create SSL/TLS secure channel'


Problem

Last week I had some fun debugging a weird bug. Within our application one module makes HTTP requests to a 3rd party service and depending on the running Windows version this call worked or failed with:

'Could not create SSL/TLS secure channel'

I knew that older TLS/SSL versions are deprecated and that many services refuse those protocols, but we still didn’t fully understand the issue:

  • The HTTPS call worked without any issues on a Windows 10 1903 machine
  • The HTTPS call didn’t work on a Windows 7 SP1 (yeah… customers…) and a Windows 10 1803 machine.

Our software uses the .NET Framework 4.7.2 and therefore I thought that this should be enough.

Root cause

Both systems (or at least the two different customer environments they represent) didn’t enable TLS 1.2.

On Windows 7 (and I think on the older Windows 10 releases) there are multiple ways to fix this. One way is to set a registry key to enable the newer protocols.
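If you can’t touch the registry, one application-level workaround for .NET Framework apps is to opt in to TLS 1.2 explicitly before the first request (a sketch; the registry route is the cleaner, machine-wide fix):

// Enable TLS 1.2 in addition to the configured defaults
System.Net.ServicePointManager.SecurityProtocol |= System.Net.SecurityProtocolType.Tls12;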

Our setup was a bit more complex than this and I needed about a day to figure everything out. A big mystery was that some services were accessible even from the old systems, until I figured out that some sites even support a pure HTTP connection without any TLS.

Well… to summarize it: Keep your systems up to date. If you have any issues with TLS/SSL make sure your system does support it.

Hope this helps!

Escape environment variables in MSIEXEC parameters


Problem

Customers can install our product on Windows with a standard MSI package. To automate the installation administrators can use MSIEXEC and MSI parameters to configure our client.

A simple installation can look like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/OneOffixx/"

The “CACHEFOLDER” parameter will be written to the .exe.config file and our program will read it and store offline content under the given location.

So far, so good.

For Terminal Server installations or “multi-user” scenarios this will not work, because each cache is bound to a local account. To solve this we could just insert the “%username%” environment variable, right?

Well… no… at least not with the obvious call, because this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%username%/OneOffixx/"

will result in a call like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/admin/OneOffixx/"

Solution

I needed a few hours and some Google-Fu to find the answer.

To “escape” those variables we need to invoke it like this:

msiexec /qb /i "OneOffixx.msi" ... CACHEFOLDER="D:/%%username%%/OneOffixx/"

Be aware: This stuff is a mess. It depends on your scenario. Checkout this Stackoverflow answer to learn more. The double percent did the trick for us, so I guess it is “ok-ish”.

Hope this helps!


Blazor for Office Add-ins: First look


Last week I did some research and tried to build a pretty basic Office Add-in (within the “new” web based Add-in model) with Blazor.

Side note: Last year I blogged about how to build Office Add-ins with ASP.NET Core.

Why Blazor?

My daily work home is in the C# and .NET land, so it would be great to use Blazor for Office Add-ins, right? An Office Add-in is just a web application with a “communication tunnel” to the hosting Office application - not very different from the real web.

What (might) work: Serverside Blazor

My first try was with a “standard” serverside Blazor application: I just pointed the dummy Office Add-in manifest file to the site and it (obviously) worked.

I assume that serverside Blazor is not very “complicated” from the client’s point of view, so it would probably work.

After my initial tweet Manuel Sidler jumped in and made a simple demo project, which also invokes the Office.js APIs from C#!

Checkout his repository on GitHub for further information.

What won’t work: WebAssembly (if I’m not missing anything)

Serverside Blazor is cool, but has some problems (e.g. a server connection is needed and scaling is not that easy) - what about WebAssembly?

Well… Blazor WebAssembly is still in preview and I tried the same setup that worked for serverside blazor.

Result:

The desktop PowerPoint (I tried to build a PowerPoint add-in) keeps crashing after I add the add-in. On Office Online it seems to work, but not for very long.

Possible reasons:

The default Blazor WebAssembly template installs a service worker. I removed that part, but I’m not 100% sure if I did it correctly. Service workers are currently not supported by the Office Add-in Edge WebView. But my experiment with Office Online and the Blazor add-in failed as well, so I don’t think that service workers are the only problem.

I’m not really sure why it’s not working, but it’s quite early for Blazor WebAssembly, so… time will tell.

What does the Office Dev Team think of Blazor?

Currently I just found one comment on this blogpost regarding Blazor:

Will Blazor be supported for Office Add-ins?

No, it will be a React Office.js add-in. We don’t have any plans to support Blazor yet. For that, please put a note on our UserVoice channel: https://officespdev.uservoice.com. There are several UserVoice items already on this, so know that we are listening to your feedback and prioritizing based on customer requests. The more requests we get for particular features, the more we will consider moving forward with developing it. 

Well… vote for it! ;)

SqlBulkCopy for fast bulk inserts


Within our product OneOffixx we can create a “full export” from the product database. Because of limitations with normal MS SQL backups (e.g. compatibility with older SQL databases etc.), we created our own export mechanism. An export can be up to 1GB and more. This is nothing too serious and far from “big data”, but still not easy to handle, and we had some issues importing larger “exports”. Our importer was based on an Entity Framework 6 implementation and it was really slow… last month we tried to resolve this and we are quite happy. Here is how we did it:

TL;DR Problem:

Bulk inserts with an Entity Framework based implementation are really slow. There is at least one NuGet package which seems to help, but unfortunately we ran into some obscure issues. This Stackoverflow question highlights some numbers and ways of doing it.

SqlBulkCopy to the rescue:

After my failed attempt to tame our EF implementation I discovered the SqlBulkCopy operation. In .NET (Full Framework and .NET Standard!) the usage is simple via the “SqlBulkCopy” class.

Our importer looks more or less like this:

using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew, TimeSpan.FromMinutes(30), TransactionScopeAsyncFlowOption.Enabled))
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(databaseConnectionString))
{
    var dt = new DataTable();
    dt.Columns.Add("DataColumnA");
    dt.Columns.Add("DataColumnB");
    dt.Columns.Add("DataColumnId", typeof(Guid)); // GUID columns must be typed explicitly

    foreach (var dataEntry in data)
    {
        dt.Rows.Add(dataEntry.A, dataEntry.B, dataEntry.Id);
    }

    bulkCopy.DestinationTableName = "Data";
    bulkCopy.AutoMapColumns(dt);
    bulkCopy.WriteToServer(dt);

    scope.Complete();
}

public static class Extensions
{
    public static void AutoMapColumns(this SqlBulkCopy sbc, DataTable dt)
    {
        sbc.ColumnMappings.Clear();

        // Map each DataTable column to the destination column with the same name
        foreach (DataColumn column in dt.Columns)
        {
            sbc.ColumnMappings.Add(column.ColumnName, column.ColumnName);
        }
    }
}

Some notes:

  • The TransactionScope is not required, but still nice.
  • The SqlBulkCopy instance just needs the databaseConnectionString.
  • A DataTable is needed and (I’m not sure why) all non-crazy SQL datatypes are magically supported, but GUIDs need to be typed explicitly.
  • Insert thousands of rows into your DataTable, point the SqlBulkCopy to your destination table, map the columns and write them to the server.
  • You can use the same instance for multiple bulk operations.
  • There is also an Async implementation available.
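The async variant is a one-liner (a sketch, assuming the same bulkCopy instance and DataTable as above):

await bulkCopy.WriteToServerAsync(dt);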

Only “downside”: SqlBulkCopy inserts table by table. You need to insert your data in the correct order if you have any DB constraints in your schema.

Result:

We reduced the import from several minutes to seconds :)

Hope this helps!

Can a .NET Core 3.0 compiled app run with a .NET Core 3.1 runtime?


Within our product we move more and more stuff to the .NET Core land. Last week we had a discussion about the required software prerequisites, and in the .NET Framework land this question was always easy to answer:

.NET Framework 4.5 or higher.

With .NET Core the answer is slightly different:

In theory runtimes are compatible within a major version, e.g. if you compiled your app with .NET Core 3.0 and a .NET Core runtime 3.1 is the only installed 3.X runtime on the machine, this runtime is used.

This system is called “Framework-dependent apps roll forward” and sounds good.
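The policy can also be set per application; a sketch of a *.runtimeconfig.json with an explicit roll-forward setting (“rollForward” accepts documented values like LatestPatch, Minor, Major or LatestMinor):

{
  "runtimeOptions": {
    "tfm": "netcoreapp3.0",
    "rollForward": "LatestMinor",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "3.0.0"
    }
  }
}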

The bad part

Unfortunately this didn’t work for us. Not sure why, but our app refused to start because a .dll was not found or missing. The exact reason is still unclear. Be aware that Microsoft has written a hint that such things might occur:

It’s possible that 3.0.5 and 3.1.0 behave differently, particularly for scenarios like serializing binary data.

The good part

With .NET Core we could ship the framework with our app (a self-contained deployment) and it should run fine wherever we deploy it.

Summary

Read the docs about the “app roll forward” approach if you have similar concerns, but test your app with that combination.

As a sidenote: 3.0 is not supported anymore, so it would be good to upgrade it to 3.1 anyway, but we might see a similar pattern with the next .NET Core versions.

Hope this helps!

EWS, Exchange Online and OAuth with a Service Account


This week we had a fun experiment: We wanted to talk to Exchange Online via the “old school” EWS API, but in a “sane” way.

But here is the full story:

Our goal

We wanted to access the organization’s contact information via a web service, just like the traditional “Global Address List” in Exchange/Outlook. We knew that EWS was an option for the OnPrem Exchange, but what about Exchange Online?

The big problem: Authentication is tricky. We wanted to use a “traditional” service account approach (think of username/password). Unfortunately the “basic auth” way will be blocked in the near future because of security concerns (which makes sense TBH). There is an alternative approach available, but at first it seemed not to work the way we wanted.

So… what now?

EWS is… old. Why?

The Exchange Web Services are old, but still quite powerful and still supported for Exchange Online and OnPrem Exchanges. On the other hand we could use the Microsoft Graph, but - at least currently - there is not a single “contact” API available.

To mimic the GAL we would need to query List Users and List orgContacts, which would be OK, but “orgContacts” has a “flaw”: “Hidden” contacts (“msexchhidefromaddresslists”) are also returned from this API and we thought that this might be a no-go for our customers.

Another argument for using EWS was that we could support OnPrem and Online with one code base.

Docs from Microsoft

The good news is that EWS and the auth problem are more or less well documented here.

There are two ways to authenticate against the Microsoft Graph or any Microsoft 365 API: Via “delegation” or via “application”.

Delegation:

Delegation means that we can write a desktop app and all actions are executed in the name of the signed-in user.

Application:

Application means that the app itself can do some actions without any user involved.

EWS and the application way

At first we thought that we might need to use the “application” way.

The good news is that this was easy and worked. The bad news is that the application needs the EWS permission “full_access_as_app”, which means that our application can access all mailboxes of the tenant. This might be OK for certain apps, but it scared us.

Back to the delegation way:

EWS and the delegation way

The documentation from Microsoft is good, but our “service account” use case was not mentioned. In the example from Microsoft a user needs to log in manually.

Solution / TL;DR

After some research I found the solution to use a “username/password” OAuth flow to access a single mailbox via EWS:

  1. Follow the normal “delegate” steps from the Microsoft Docs

  2. Instead of this code, which will trigger the login UI:

...
// The permission scope required for EWS access
var ewsScopes = new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" };

// Make the interactive token request
var authResult = await pca.AcquireTokenInteractive(ewsScopes).ExecuteAsync();
...

Use the “AcquireTokenByUsernamePassword” method:

...
var cred = new NetworkCredential("UserName", "Password");
var authResult = await pca.AcquireTokenByUsernamePassword(new string[] { "https://outlook.office.com/EWS.AccessAsUser.All" }, cred.UserName, cred.SecurePassword).ExecuteAsync();
...

To make this work you need to enable “Treat application as public client” under “Authentication” > “Advanced settings” in your AAD application, because this uses the “Resource owner password credentials flow”.

Now you should be able to get the AccessToken and do some EWS magic.

I posted a shorter version on Stackoverflow.com.

Hope this helps!

How to run a legacy WCF .svc Service on Azure AppService


Last month we wanted to run a good old WCF powered service on Azure “App Service”.

WCF… what’s that?

If you are not familiar with WCF: Good! For the interested ones: WCF is or was a framework to build mostly SOAP based services in the .NET Framework 3.0 timeframe. Some parts were “good”, but most developers would call it a complex monster.

Even in the glory days of WCF I tried to avoid it at all cost, but unfortunately I need to maintain a WCF based service.

For the curious: The project template and the tech is still there - search for “WCF” in Visual Studio.

The template will produce a project where the actual “service endpoint” is the Service1.svc file.

Running on Azure: The problem

Let’s assume we have an application with a .svc endpoint. In theory we can deploy this application to a standard Windows Server/IIS without major problems.

Now we try to deploy this very same application to Azure AppService and this is the result after we invoke the service from the browser:

"The resource you are looking for has been removed, had its name changed, or is temporarily unavailable." (HTTP Response was 404)

Strange… very strange. In theory a blank HTTP 400 should appear, but not an HTTP 404. The service itself was not “triggered”: we had some logging in place, but the request never reached the actual service.

After hours of debugging, testing and googling around I created a new “blank” WCF service from the Visual Studio template and got the same error.

The good news: It was not just my code - something was blocking the request.

After some hours I found a helpful switch in the Azure Portal and activated the “Failed Request tracing” feature (yeah… I could have found it sooner) and the trace revealed the actual cause.

Running on Azure: The solution

My initial thoughts were correct: The request was blocked. It was treated as “static content” and the actual WCF module was not mapped to the .svc extension.

To “re-map” the .svc extension to the correct handler I needed to add this to the web.config:

<system.webServer>
  <handlers>
    <remove name="svc-integrated" />
    <add name="svc-integrated"
         path="*.svc"
         verb="*"
         type="System.ServiceModel.Activation.HttpHandler"
         resourceType="File"
         preCondition="integratedMode" />
  </handlers>
</system.webServer>

With this configuration everything worked as expected on Azure AppService.

Be aware:

I’m really not 100% sure why this is needed in the first place. I’m also not 100% sure if the name svc-integrated is correct or important.

This blogpost is a result of these tweets.

That was a tough ride… Hope this helps!

How to share an Azure subscription in a team


We at Sevitec are moving more and more workloads for us or our customers to Azure.

So the basic question needs an answer:

How can a team share an Azure subscription?

Be aware: This approach works for us. There might be better options. If we do something stupid, just tell me in the comments or via email - help is appreciated.

Step 1: Create a directory

We have a “company directory” with a fully configured Azure Active Directory (incl. User sync between our OnPrem system, Office 365 licenses etc.).

Our rule of thumb is: We create an individual directory for each product team and all team members are invited to the new directory.

Keep in mind: A directory itself costs you nothing but might help you to keep things manageable.

Create a new tenant directory

Step 2: Create a group

This step might be optional, but all team members - except the “Administrator” - have the same rights and permissions in our company. To keep things simple, we created a group with all team members.

Put all invited users in a group

Step 3: Create a subscription

Now create a subscription. The typical “Pay-as-you-go” offer will work. Be aware that the user who creates the subscription is initially set up as the Administrator.

Create a subscription

Step 4: “Share” the subscription

This is the most important step:

You need to grant the individual users or the group (from step 2) the “Contributor” role for this subscription via “Access control (IAM)”. The hard part is to understand how those “role assignments” affect the subscription. I’m not even sure if “Contributor” is the best fit, but it works for us.

Pick the correct role assignment
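The same role assignment can also be scripted with the Azure CLI (a sketch - the IDs are placeholders):

az role assignment create --assignee "<user-or-group-object-id>" --role "Contributor" --scope "/subscriptions/<subscription-id>"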

Summary

I’m not really sure why such a basic concept is labeled so poorly, but you really need to pick the correct role assignment, and then the other person should be able to use the subscription.

Hope this helps!

DllRegisterServer 0x80020009 Error


Last week I had a very strange issue and the solution was really “easy”, but took me a while.

Scenario

For our products we build Office COM add-ins with a C++ based “shim” that boots up our .NET code (e.g. something like this). As is the nature of COM, it requires some pretty dumb registry entries to work and in theory our toolchain should “build” and automatically “register” the output.

Problem

The registration process just failed with an error message like that:

The module xxx.dll was loaded but the call to DllRegisterServer failed with error code 0x80020009

After some research you will find some very old stuff or only some general advice like in this Stackoverflow.com question, e.g. “run it as administrator”.

The solution

Luckily we had another project where we use the same approach and it worked without any issues. After comparing the files I noticed some subtle differences: The file encoding was different!

In my failing project some C++ files were encoded with UTF-8 BOM. I changed everything to UTF-8 (without BOM) and after this change it worked.

My reaction:

(╯°□°)╯︵ ┻━┻

I’m not a C++ dev and I’m not even sure why some files had the wrong encoding in the first place. It “worked” - at least Visual Studio 2019 was able to build the stuff, but registering it with “regsvr32” just failed.

I needed some hours to figure that out.

Hope this helps!


Update AzureDevOps Server 2019 to AzureDevOps Server 2019 Update 1


We did this update in May 2020, but I forgot to publish the blogpost… so here we are.

Last year we updated to Azure DevOps Server 2019 and it went more or less smooth.

In May we decided to update to the “newest” release at that time: Azure DevOps Server 2019 Update 1.1

Setup

Our Azure DevOps Server was running on a “new” Windows Server 2019 and everything was still kind of newish - so we just needed to update the Azure DevOps Server app.

Update process

The actual update was really easy, but we had some issues after the installation.


Aftermath

We had some issues with our Build Agents - they couldn’t connect to the AzureDevOps Server:

TF400813: Resource not available for anonymous access

As a first “workaround” (and a nice enhancement) we switched internally from HTTP to HTTPS, but this didn’t solve the problem.

The real reason was that our “Azure DevOps Service User” didn’t have the required write permissions for this folder:

C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys

The connection issue went away, but now we had introduced another problem: Our SSL certificate was “self signed” (from our domain controller), so we needed to register the agents like this:

.\config.cmd --gituseschannel --url https://.../tfs/ --auth Integrated --pool Default-VS2019 --replace --work _work

The important parameter is --gituseschannel, which is needed when dealing with “self signed, but domain ‘trusted’”-certificates.

With this setting everything seemed to work as expected.

Only node.js projects or tooling were “problematic”, because node.js itself doesn’t use the Windows certificate store.

To resolve this, the root certificate from our domain controller must be stored on the agent:

[Environment]::SetEnvironmentVariable("NODE_EXTRA_CA_CERTS", "C:\SSLCert\root-CA.pem", "Machine")

Summary

The update itself was easy, but it took us some hours to configure our Build Agents. After the initial hiccup it went smooth from there - no issues and we are ready for the next update, which is already released.

Hope this helps!

How to get all distribution lists of a user with a single LDAP query


In 2007 I wrote a blogpost about how easy it is to get all “groups” of a given user via the tokenGroups attribute.

Last month I had the task to check why “distribution list memberships” are not part of the result.

The reason is simple:

A pure distribution list (not security enabled) is not a security group, and only security groups are part of the “tokenGroups” attribute.

After some thoughts and discussions we agreed that it would be good if we could enhance our function and treat distribution lists like security groups.

How to get all distribution lists of a user?

Getting all groups of a given user might seem trivial, but the problem is that groups can contain other groups. As always, there are a couple of ways to get a “full flat” list of all group memberships.

A stupid way would be to load all groups in a recursive function - this might work, but will result in a flood of requests.

A clever way would be to write a good LDAP query and let the Active Directory do the heavy lifting for us, right?

1.2.840.113556.1.4.1941

I found some sample code online with a very strange LDAP query and it turns out: There is a “magic” LDAP matching rule called “LDAP_MATCHING_RULE_IN_CHAIN” (OID 1.2.840.113556.1.4.1941) and it does everything we are looking for:

var getGroupsFilterForDn = $"(&(objectClass=group)(member:1.2.840.113556.1.4.1941:={distinguishedName}))";

// CreateDirectorySearcher and the "groups" list come from the surrounding demo code (see the GitHub link below)
using (var dirSearch = CreateDirectorySearcher(getGroupsFilterForDn))
{
    using (var results = dirSearch.FindAll())
    {
        foreach (SearchResult result in results)
        {
            if (result.Properties.Contains("name") && result.Properties.Contains("objectSid") && result.Properties.Contains("groupType"))
                groups.Add(new GroupResult() { Name = (string)result.Properties["name"][0], GroupType = (int)result.Properties["groupType"][0], ObjectSid = new SecurityIdentifier((byte[])result.Properties["objectSid"][0], 0).ToString() });
        }
    }
}

With a given distinguishedName of the target user, we can load all distribution and security groups (see below…) transitively!

Combine tokenGroups and this

During our testing we found some minor differences between the LDAP_MATCHING_RULE_IN_CHAIN and the tokenGroups approach: Some “system-level” security groups were missing with the LDAP_MATCHING_RULE_IN_CHAIN way. In our production code we use a combination of those two approaches and it seems to work.

A full demo code how to get all distribution lists for a user can be found on GitHub.

Hope this helps!

Microsoft Graph: Read user profile and group memberships


In our application we have a background service that “syncs” user data and group membership information from the Microsoft Graph to our database.

The permission model:

Programming against the Microsoft Graph is quite easy. There are many SDKs available, but understanding the permission model is hard.

‘Directory.Read.All’ and ‘User.Read.All’:

Initially we only synced the “basic” user data to our database, but then some customers wanted to reuse some other data already stored in the Graph. Our app required the ‘Directory.Read.All’ permission, because we thought that this would be the “highest” permission - this is wrong!

If you need “directory” information, e.g. memberships, the Directory.Read.All or Group.Read.All is a good starting point. But if you want to load specific user data, you might need to have the User.Read.All permission as well.
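To illustrate the difference, a sketch with the Microsoft Graph .NET SDK (assuming an already authenticated GraphServiceClient; the user id is a placeholder):

// Needs User.Read.All: read the user profiles
var users = await graphClient.Users.Request().GetAsync();

// Needs Directory.Read.All (or Group.Read.All): read the group memberships of a user
var memberships = await graphClient.Users["<user-id>"].MemberOf.Request().GetAsync();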

Hope this helps!

How to self host Google Fonts


Google Fonts are really nice and widely used. Typically a Google Font consists of the actual font files (e.g. woff, ttf, eot etc.) and some CSS, which points to those font files.

In one of our applications we used a HTML/CSS/JS Bootstrap-like theme and the theme linked some Google Fonts. The problem was that we wanted to self host everything.

After some research we discovered this tool: Google-Web-Fonts-Helper


Pick your font, select your preferred CSS option (e.g. if you need to support older browsers etc.) and download a complete .zip package. Extract those files and add them to your web project like any other static asset. (And check the font license!)
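The generated CSS looks roughly like this (a sketch - font name and paths depend on your selection and on where you place the files):

@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  src: url('../fonts/roboto-v20-latin-regular.woff2') format('woff2'),
       url('../fonts/roboto-v20-latin-regular.woff') format('woff');
}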

The project site is on GitHub.

Hope this helps!

Today I learned (sort of) 'fltmc' to inspect the IO request pipeline of Windows


The headline is obviously a big lie, because I followed this Twitter conversation last year, but it’s still interesting to me and I wanted to write it down somewhere.

The starting point was that Bruce Dawson (Google programmer) noticed that building Chrome on Windows is slow for various reasons.

Trentent Tye told him to disable the “filter driver”.

If you have never heard of a “filter driver” (like me :)), you might want to take a look here.

To see the loaded filter drivers on your machine, try this: Run fltmc (fltmc.exe) as admin.


This makes more or less sense to me. I’m not really sure what to do with that information, but it’s cool (nerd cool, but anyway :)).
