Create IP filter rules faster with the Azure CLI for Azure IoT Hub, DPS and App Services

Implementing network security for the IoT Hub and the Device Provisioning Service (DPS) in Azure is challenging, as simply putting them into a VNet is not always an option. Sometimes you can at least restrict the allowed IP addresses to the ranges of your provider, so that some restriction is placed on the network communication. Most of our cloud resources are defined in Terraform, where we created a variable for the IP restrictions as follows:

variable "ip_ranges" {
  type = list(string)
  default = [
    "1.2.3.4/28",
    "1.2.3.4/29",
    "1.2.3.4/30"
  ]
}

Using this variable from an App Service then looks like this:

resource "azurerm_app_service" "your_service" {
  name                    = "yourservice"
  location                = "yourlocation"
  resource_group_name     = "yourresourcegroup"
  app_service_plan_id     = "yourplan"
  https_only              = true

  site_config {
    always_on = "true"
    linux_fx_version = "DOTNETCORE|3.1"

    dynamic "ip_restriction" {
      for_each = var.ip_ranges
      content {
        ip_address = ip_restriction.value
        action = "Allow"
      }
    }
  }
}

I thought I would just do the same for the IoT Hub in Terraform:

resource "azurerm_iothub" "device_hub" {
  name                = "iot_hub_name"
  resource_group_name = "resource_group_name"
  location            = "location"
  public_network_access_enabled = true

  sku {
    name     = "S1"
    capacity = "1"
  }

  dynamic "ip_filter_rule" {
    for_each = var.ip_ranges
    content {
      name = ip_filter_rule.value
      ip_mask = ip_filter_rule.value
      action = "Accept"
    }
  }
}

But then there was an error. Terraform was composing the wrong body of the REST request to the Azure management endpoint.

So I decided to use our escape hatch: a null_resource that calls an Azure CLI command to create the IP filter rules. I describe the reasons for and drawbacks of this solution at the end of this article. The errors I got were:
Failure sending request: StatusCode=0 -- Original Error: Code="Failed" Message="The async operation failed." InnerError={"unmarshalError":"json: cannot unmarshal number into Go struct field serviceErrorInternal.code of type string"} AdditionalInfo=[{"code":400116,"httpStatusCode":"BadRequest","message":"Valid Connection string should be provided. endpointName: ***-endpoint. If you contact a support representative please include this correlation identifier: 6479f0f1-57dd-4c74-8b05-1bd96c2cf044, timestamp: 2021-06-14 14:30:12Z, errorcode: IH400116."}]
and
devices.IotHubResourceClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="Failed" Message="The async operation failed." InnerError={"unmarshalError":"json: cannot unmarshal number into Go struct field serviceErrorInternal.code of type string"} AdditionalInfo=[{"code":400059,"httpStatusCode":"BadRequest","message":"Request body validation failed. If you contact a support representative please include this correlation identifier: 589873a8-e25d-41b3-aebc-eaef22fa7aa0, timestamp: 2021-06-14 14:33:45Z, errorcode: IH400059."}]

Creating the IP filter rules takes, however, a LOT of time if they are added one by one as follows (described here):

az iot hub update --name MyIotHub --add properties.ipFilterRules filter_name=test-rule action=Accept ip_mask=127.0.0.0/31

For DPS you don't even have that option, as at the time of writing the corresponding CLI command did not exist.

The resolution is a more generic CLI command that I found (here): az resource update.
With this you can update the whole ipFilterRules block of the resource at once, and it also works for DPS:

$joined_filters = ($ip_ranges.Split(',') | % {"{'action':'Accept','filterName':`'$($_.Replace('/','-').Replace('.','_'))`','ipMask':`'$_`'}"}) -join ','
$filter_json = "`"[$joined_filters]`""
az resource update -n $iothub_name -g $resource_group_name --resource-type Microsoft.Devices/IotHubs --set properties.ipFilterRules=$filter_json
az resource update --ids $dps_id --resource-type Microsoft.Devices/ProvisioningServices --set properties.ipFilterRules=$filter_json

# A working example of what gets composed: az resource update -n youriothub -g yourrg --resource-type Microsoft.Devices/IotHubs --set properties.ipFilterRules='[{"action":"Accept","filterName":"TrustedIP","ipMask":"192.168.0.1/32"},{"action":"Accept","filterName":"TrustedIP2","ipMask":"192.168.0.2/32"}]'

I use a comma-separated string as the input for $ip_ranges, e.g. "1.2.3.4/28,1.2.3.4/29".

Oh yeah, I promised to elaborate on why I chose this approach. Frankly, it would take more time to fix the solution in Terraform. Yes, I know: resources created by the CLI are much harder to maintain, destroy does not happen automatically, and so on. But for now, we live with it.


Azure Table Storage with Managed Identities

At the time of writing this post there is no official support for Azure Table Storage with Managed Identities, which is a huge pity. I'm focusing on solutions that use `Azure.Identity` and the `DefaultAzureCredential` class. My goal was also not to mix it up with token requests of the old `AppAuthentication` way. For services like SQL Server or Blob Storage (Cool SQL and Blob solution) there are really nice solutions. In a similar way I also found a solution for queues in the SDK (search for QueueServiceClient), where I could reuse the nice separation-of-concerns approach with `AddAzureClients`.
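
For reference, the queue registration I mean looks roughly like the sketch below. It is a minimal example assuming the Microsoft.Extensions.Azure and Azure.Storage.Queues packages; the storage account URI is a placeholder.

using System;
using Azure.Identity;
using Microsoft.Extensions.Azure;
using Microsoft.Extensions.DependencyInjection;

public static class QueueClientRegistration
{
    public static void AddQueueClients(this IServiceCollection services)
    {
        services.AddAzureClients(clients =>
        {
            // Placeholder account URI - replace with your own storage account.
            clients.AddQueueServiceClient(new Uri("https://yourstorageaccount.queue.core.windows.net"));

            // The Managed Identity (or developer credentials) are picked up automatically.
            clients.UseCredential(new DefaultAzureCredential());
        });
    }
}

Nothing comparable existed for Table Storage at the time, which is what the rest of this post works around.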

Yepp, but there is no really nice one for Table Storage, so I decided to solve it on my own, as I like the concept of not having to configure credentials for our services. My solution was inspired by an approach I found for Cosmos DB (CosmosDB with Managed Identity), which also has no Managed Identity support. In short, I call the management API to get the access keys and create the client with them. For this, the Managed Identity needs a role assignment on the Storage Account that includes the listKeys action (e.g. "Reader and Data Access"; plain Reader is not sufficient).

So, let's see the code. I took out some customer specifics, so it might not compile exactly as shown, but I hope you'll get the point.

// Classes for the deserialization of the management request
public class StorageAccountListKeysResult
{
    public StorageAccountListKeysResultElement[] Keys { get; set; }
}

public class StorageAccountListKeysResultElement
{
    public string KeyName { get; set; }
    public string Permissions { get; set; }
    public string Value { get; set; }
}

// Table Storage Configuration
public class TableStorageConfiguration
{
    public TableStorageConfiguration(string connectionString, string subscriptionId, string envTag)
    {
        SubscriptionId = subscriptionId;
        ConnectionString = connectionString;
        EnvTag = envTag ?? throw new ArgumentNullException(nameof(envTag));
        Validate();
    }

    public string SubscriptionId { get; }
    public string EnvTag { get; }
    public string ConnectionString { get; }

    public bool UseDefaultAzureCredential => string.IsNullOrEmpty(ConnectionString);

    public string ResourceGroupName => $"yourconvention-{EnvTag}-rg";
    public string StorageAccountName => $"yourconvention{EnvTag}storage";

    private void Validate()
    {
        if ((string.IsNullOrEmpty(SubscriptionId) && string.IsNullOrEmpty(ConnectionString)) ||
            (!string.IsNullOrEmpty(SubscriptionId) && !string.IsNullOrEmpty(ConnectionString)))
        {
            throw new Exception(
                $"Exactly one of the {nameof(SubscriptionId)} (authenticating via DefaultAzureCredential) or the {nameof(ConnectionString)} (with credentials inside) must be set.");
        }
    }
}

The important part comes here…

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Autofac;
using Azure.Identity;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;

namespace CrossCutting.TableStorage
{
    public static class TableStorageContainerBuilderExtension
    {
        private static readonly string[] ManagementScopes =
        {
            "https://management.azure.com/.default"
        };

        public static void RegisterCloudTableClient(this ContainerBuilder builder, TableStorageConfiguration configuration)
        {
            builder.Register(c =>
                {
                    if (configuration.UseDefaultAzureCredential)
                    {
                        var storageCredentials = GetStorageCredentials(configuration);
                        var cloudStorageAccount = new CloudStorageAccount(storageCredentials, true);
                        return cloudStorageAccount.CreateCloudTableClient();
                    }

                    CloudStorageAccount account = CloudStorageAccount.Parse(configuration.ConnectionString);
                    return account.CreateCloudTableClient();
                });
        }

        private static StorageCredentials GetStorageCredentials(TableStorageConfiguration config)
        {
            // Get a token for the Azure management API with the Managed Identity (or developer credentials).
            var credential = new DefaultAzureCredential();
            var tokenContext = new Azure.Core.TokenRequestContext(ManagementScopes);
            var token = credential.GetToken(requestContext: tokenContext);

            HttpClient httpClient = new HttpClient();
            httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token.Token);

            var endpoint =
                $"https://management.azure.com/subscriptions/{config.SubscriptionId}/resourceGroups/{config.ResourceGroupName}/providers/Microsoft.Storage/storageAccounts/{config.StorageAccountName}/listKeys?api-version=2021-01-01&$expand=kerb";
            var result = httpClient.PostAsync(endpoint, new StringContent("")).Result;
            result.EnsureSuccessStatusCode();

            // ReadAsAsync<T> comes from the Microsoft.AspNet.WebApi.Client package.
            var tableStorageCredentials = result.Content.ReadAsAsync<StorageAccountListKeysResult>().Result;
            if (tableStorageCredentials.Keys?.Length > 0)
            {
                // The StorageCredentials constructor expects (accountName, keyValue, keyName).
                return new StorageCredentials(config.StorageAccountName, tableStorageCredentials.Keys[0].Value, tableStorageCredentials.Keys[0].KeyName);
            }

            throw new Exception("No access keys could be retrieved from the Azure Management API");
        }
    }
}

Finally the usage…

// The usage
builder.RegisterCloudTableClient(new TableStorageConfiguration(...));
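
As a concrete (hypothetical) example of the managed-identity path: the connection string is left empty, so UseDefaultAzureCredential applies and the access keys are fetched via the management API when the client is resolved.

// Hypothetical values for illustration only.
var tableStorageConfiguration = new TableStorageConfiguration(
    connectionString: null,
    subscriptionId: "00000000-0000-0000-0000-000000000000",
    envTag: "dev");

builder.RegisterCloudTableClient(tableStorageConfiguration);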

Nested for_each in Terraform for diagnostic settings of multiple Azure storage accounts

There were multiple challenges hiding behind every single word of the title. The solution to one of the most important ones (finding the right target resource_id and the right categories) was, as so often, found on Stack Overflow. For the sake of simplicity I only share the relevant Terraform code; it will not work without you implementing the referenced resources. It should just give you a hint on how to create a nice, readable solution for the 15 diagnostic-setting resources with the right log and metric categories.

locals {
  event_storage_account_name          = "your_globallyunique_name_for_events"
  sql_audit_storage_account_name      = "your_globallyunique_name_for_sqlaudit"
  appsvc_storage_account_name         = "your_globallyunique_name_for_appsvc"
  storage_names                       = [local.event_storage_account_name, local.sql_audit_storage_account_name, local.appsvc_storage_account_name]
  storage_services = ["", "blobServices", "fileServices", "tableServices", "queueServices"]
  storage_diagnostic_services = flatten([
    for storage_name in local.storage_names: [
      for service in local.storage_services: {
        storage_name = storage_name
        storage_service = service
      }
    ]
  ])
}

resource "azurerm_monitor_diagnostic_setting" "storage_table_diagnostics" {
  for_each = {for storage_diagnostic_service in local.storage_diagnostic_services : "${storage_diagnostic_service.storage_name}.${storage_diagnostic_service.storage_service}" => storage_diagnostic_service}
  name               = "${each.value.storage_name}_${each.value.storage_service}_diagnosticSettings"
  target_resource_id = "/subscriptions/${var.subscription_id}/resourceGroups/${local.resource_group_name}/providers/Microsoft.Storage/storageAccounts/${each.value.storage_name}/${each.value.storage_service}${each.value.storage_service == "" ? "" : "/default"}"
  log_analytics_workspace_id = azurerm_log_analytics_workspace.main_loganalytics_workspace.id

  dynamic "log" {
    for_each = each.value.storage_service == "" ? [] : ["StorageDelete", "StorageRead", "StorageWrite"]
		
    content {
      category = log.value
      enabled  = true

      retention_policy {
        enabled = true
        days = var.cost_intensive_settings[local.env_type].log_retention_days
      }
    }
  }

  dynamic "metric" {
    for_each = ["Transaction", "Capacity"]
		
    content {
      category = metric.value
      enabled  = true

      retention_policy {
        enabled = true
        days = var.cost_intensive_settings[local.env_type].log_retention_days
      }
    }
  }
}

ASP.NET Core 3.0 and integration testing

If you are migrating from .NET Core 2.2 to 3.0 you’ve probably met the Migration Guide.

For integration testing you might create a TestServer based on an existing Startup class. Certainly you want to stay as close as possible to the production setup of your service, but there are still a few options you want to have configured differently. One such option is authorization. In .NET Core 2.2 there was a trick with the filters as follows:

s.AddMvc(options => options.Filters.Add(new AllowAnonymousFilter()));

If you don't handle it differently, you will soon get 401 Unauthorized responses. Sure, you can solve this via configuration, but that can quickly end up in configuration chaos.

Another solution is to use a kind of PassThroughHandler, as follows:

var webHostBuilder = new WebHostBuilder()
    .UseEnvironment("Test")
    .UseStartup<Startup>()
    .ConfigureTestServices(s =>
    {
        s.AddSingleton<IAuthorizationHandler, PassThroughHandler>();
        s.AddAuthorization(options =>
        {
            options.DefaultPolicy = new AuthorizationPolicyBuilder().AddRequirements(new PassThroughRequirement()).Build();
        });
    });

_testServer = new TestServer(webHostBuilder);

The classes referenced above are as follows; I implemented them based on ASP.NET Core 3.0:

using System.Threading.Tasks; 
using Microsoft.AspNetCore.Authorization; 
 
namespace YourNamespace
{ 
    public class PassThroughHandler : AuthorizationHandler<PassThroughRequirement>
    { 
        protected override Task HandleRequirementAsync(AuthorizationHandlerContext context, PassThroughRequirement requirement) 
        { 
            context.Succeed(requirement); 
            return Task.CompletedTask; 
        } 
    }

    public class PassThroughRequirement : IAuthorizationRequirement 
    { 
    }  
}

Probable bug in saml2aws working with the AWS CLI on Windows – The config profile (saml) could not be found

I have been working with saml2aws for a while, and it has been working pretty well. Today, however, I had an issue that took me a few hours to resolve.
At some point I got the error message: "The config profile (saml) could not be found".
I ran the configure command, deleted the credentials file, recreated it, removed the default profile and so on, with no results.
In one of the verbose logs of the AWS CLI and saml2aws I saw a path starting with N:\….., and indeed I found the created file at N:\.aws\credentials (N: being my %HOMEDRIVE%%HOMEPATH%). I could then copy it to the right place, which is what I have been doing ever since whenever my token session expires.


Debugging Python Airflow DAG in Docker Container with Visual Studio

I have needed to work a little more cross-platform lately, so I have a lot of things to blog about.

I actually managed to debug into an Airflow DAG written in Python, running in a Docker container under Windows, from Visual Studio. I used the ptvsd Python package for it.

Most of the problems were caused by line endings, e.g.:
standard_init_linux.go:185: exec user process caused "no such file or directory"
The resolution is quite simple (described here and here): just save the files used by Docker with Unix line endings (Notepad++ > Edit > EOL Conversion).

For the whole scenario I used the puckel Docker container and just added ptvsd to the pip-installed Python packages. I didn't manage to attach the requirements.txt as a volume under Windows (as described in its docs), so I forked the repo and changed the Dockerfile.

Of course, changing the attach URL (in VS 2017) also doesn't fully work (as also described here), so the protocol and port have to be added as well: tcp://secret@localhost:5678/. The refresh button doesn't work either, so the only usable way is to press Enter in the textbox after entering the URL.

Even though this is an obvious problem and the resolution is really simple, it took me a lot of time to figure out what was happening. Some types of changes broke the container, others didn't :) until I realized that the changes spanning multiple lines were the problem. Omg.


TFS 2017 (on premises) Get Latest Problem

There is a new feature in Team Foundation Server 2017 (on premises): you can use variables in the repository mappings. There is, however, a bug related to the "Get Sources" step.
If there is at least one repository path that is not dynamically mapped, the latest version gets calculated based on that path. This usually results in not getting the latest version for the dynamically mapped repository paths. The following trick solves the problem:

Just add a variable like $(empty.path) to the "static" mappings as well, to ensure the latest version is used for the build.

I see it in fact as a bug in TFS.


Useful commands when starting with a Raspberry Pi and Node.js

Every time I get a new Raspberry Pi and want to use Node.js on it, I sooner or later need to do the same things, so I collect them in a list here. The references can be found at the end.

# Update the PI's OS and its packages 
sudo apt-get update && sudo apt-get dist-upgrade  
sudo rpi-update  

# Start something on startup
sudo nano /etc/rc.local

# Set up remote desktop
sudo apt-get install xrdp

# Set the right resolution (uncomment and set these in the HDMI section of config.txt)
sudo nano /boot/config.txt 
hdmi_group=1 
hdmi_mode=16

# Install Node.js on PI
wget http://node-arm.herokuapp.com/node_latest_armhf.deb
sudo dpkg -i node_latest_armhf.deb
# Test it with (restart Terminal if needed)
node -v
# If the nodejs-legacy conflicts
sudo apt-get remove nodejs nodejs-legacy
# After that you may need to do the following
sudo apt-get update
sudo apt-get install node-gyp
# Edit the /usr/include/nodejs/deps/v8/include/v8.h file as described here https://github.com/fivdi/onoff/wiki/Node.js-v0.10.29-and-native-addons-on-the-Raspberry-Pi

# In case of necessity
sudo npm install onoff
# In case of necessity
sudo npm install -g node-gyp
# In case of necessity
sudo npm install node-dht-sensor

References

http://www.linuxx.eu/2014/07/mmal-mmalvccomponentenable-failed-to.html
https://coderwall.com/p/ezb3aw/node-garagepi-the-garage-door-opener-using-node-js
https://www.jeremymorgan.com/tutorials/raspberry-pi/how-to-remote-desktop-raspberry-pi/
https://www.raspberrypi.org/forums/viewtopic.php?f=66&t=130217

Further useful links for reading
https://weblogs.asp.net/bleroy/getting-your-raspberry-pi-to-output-the-right-resolution

Auto Running Programs-Command Line


The best and most important page of the Git Book

A long time has passed since I last wrote a blog post. The reason was not only that I didn't have that many things to blog about, but also that I switched to a new company. This first article is not a big thing either; it is about a useful link to read when learning Git branching.

Have fun reading it :)


SQL Server Profiler Replay function

Recently I wanted to see how a server behaves when running some SQL statements in parallel against one of its databases. I wanted to simulate a real user session, so I decided to record a scenario locally and run it against the server. I planned to use the Replay function of SQL Server Profiler. To record a "replayable" session, a few settings have to be made that differ from the Standard (default) template:

  • When starting a new trace, go to the Events Selection tab
  • Check the Show all events checkbox and add the following events:
    • Cursors group
      • CursorExecute
      • CursorOpen
      • CursorPrepare
    • Stored Procedures group
      • RPC Output Parameter
      • RPC:Starting
    • TSQL group
      • Exec Prepared SQL
      • Prepare SQL
  • Uncheck the Show all events checkbox
  • Check the Show all columns checkbox
  • For every event, check the following checkboxes in the grid:
    • DatabaseID
    • DatabaseName

There you go. You can then start replaying sessions :)


Browser (javascript) timezone to .NET timezone (DST problem) with fallback

Yet another timezone post. Did I mention that I don't like timezones? I don't understand why the browser cannot simply give us a timezone identifier according to a standard like IANA.

Anyway, I identified a problem related to Daylight Saving Time (DST). I needed to create an Excel file (and some PDFs) on the server side in response to a button click. I immediately thought of the problem that I would need to send the timezone offset in minutes from the browser, so that I could present correct times to the person using the app. We generally store date values in UTC in our database.

"Luckily" I started this task on a day that was still in winter time and finished it on a day that was already in summer time. I created a report on Friday and again on Monday and saw that the converted times were different. Oh yeah: the time has to be adjusted to "local time" not according to the current timezone offset, but according to the offset that applies at that point in time in the browser's timezone. The solution is easy, I thought: I just need to send the timezone itself to the server instead of the offset. It turned out not to be so easy.

First I wanted to get a better understanding of the topic, so I started here: http://encosia.com/how-i-handle-json-dates-returned-by-aspnet-ajax/. Everywhere I found pretty ugly solutions, like this one. I just couldn't believe that there was no better way.

Then I found a solution for the topic with moment.js. It was pretty hard to figure out how to create the moment-timezone-data.js file, until I found it by pressing the Data button on their site (http://momentjs.com/timezone/data/), clicking together a "Browser" variant, and copying it into the above-named JS file. The problem was that this solution didn't work for the specific time zone I needed (in Australia). Sad, but it led me to really understand the problem.

My solution was in the end the following:

  1. As the main solution
    1. Use the HTML5 Geolocation API (navigator.geolocation) to get longitude and latitude information, as described e.g. over here.
    2. The longitude and latitude information can then be used to determine the timezone as described here.
  2. As a fallback, if the user doesn't share his/her geolocation
    1. Add the JavaScript library jsTimezoneDetect (https://bitbucket.org/pellepim/jstimezonedetect)
    2. Send the calculated IANA timezone name to the server
    3. Use the Noda Time library (available on NuGet: NodaTime) to convert it to a Windows time zone identifier as described here: http://stackoverflow.com/questions/17348807/how-to-translate-between-windows-and-iana-time-zones

The HTML5 Geolocation API is the much more precise way, but the user cannot be forced to allow it. jsTimezoneDetect is quite OK as a fallback solution; it worked for me for the timezones I definitely needed.

I also know that Noda Time is a bit of an overkill, as one could also just use the Unicode CLDR mapping directly for the IANA-to-Windows identifier resolution, but I was just too lazy :)
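
For completeness, the Noda Time part of step 2.3 looks roughly like the sketch below. It is a minimal version that ignores IANA aliases and territory-specific mappings; the linked Stack Overflow answer handles those cases more thoroughly.

using System;
using System.Linq;
using NodaTime.TimeZones;

public static class TimeZoneMapping
{
    // Maps an IANA zone id (e.g. "Australia/Sydney") to a Windows id (e.g. "AUS Eastern Standard Time").
    public static string IanaToWindowsId(string ianaZoneId)
    {
        var mapZone = TzdbDateTimeZoneSource.Default.WindowsMapping.MapZones
            .FirstOrDefault(m => m.TzdbIds.Contains(ianaZoneId));

        if (mapZone == null)
            throw new ArgumentException($"No Windows mapping found for '{ianaZoneId}'.", nameof(ianaZoneId));

        return mapZone.WindowsId;
    }
}

// On the server, a UTC value from the database can then be converted like this:
// var windowsId = TimeZoneMapping.IanaToWindowsId(clientIanaZone);
// var localTime = TimeZoneInfo.ConvertTimeBySystemTimeZoneId(utcValueFromDb, windowsId);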

The longitude and latitude information can also be combined with data from online services, as described here. I didn't like that approach at all, therefore I applied the solution above.


Causes of Bugs

You are probably writing "bug-free" code, right? I don't either. Really sad…
I do want to, and our customers want that too. For that reason I created a statistic in the projects I worked on, to find out the most common causes of bugs. The poll below already contains its results. If you fix a bug, please come back here and fill out the poll!

I categorized the bugs based on the actions a developer took when fixing them. Be careful when filling out the poll, as it heavily affects the results. If you don't understand something in the poll, read the explanation after it carefully. "Process and concept" problems are not covered; please read the explanation to see what I categorized as such.

Explanation

You can select one or more of the categories above or suggest a new one as a comment. I'm mainly interested in the actions you took when fixing the bug, and of course it could be more than one of the above. I'll try to explain some of the categories; if you have further questions, please comment on this blog.

  • If I fixed names of variables, classes or methods, I selected the Wrong naming category.
  • When I needed to reduce the nesting of "if" statements to understand what the code was about, I checked the Reduce nesting option.
  • Removing a duplicate to fix a bug resulted in a point for Code duplication.
  • If I reduced the size of a class or method by extracting a class or method, I incremented the Too long class or method category.
  • Adding an argument check or a null reference check ended with a selection of the Argument check category.
  • If I needed to do a bigger refactoring of a class based on its dependencies (like too many injected constructor parameters), I incremented Reduce dependent and depending types. Read further here.
  • Cohesion is a bigger topic. To fix bugs this way, the cohesion was raised to a higher level (preferably functional). Simply said, it is about grouping modules more semantically. If I grouped methods or fields of a class differently (with only slight changes to them), or moved classes of a namespace into other namespaces, then I increased the votes here. If you want to read further, check this Wikipedia site.
  • When programming object-oriented we should follow the SOLID principles. If someone didn't completely do that and I fixed it accordingly, I voted for one of these categories.
  • If a bug came back over and over as a regression bug, I introduced a unit test for it. In fact I try to add a unit test for every bug I need to fix, as I find those unit tests the most useful. In these cases I selected the corresponding category.

The Story

A few years ago I started trying to increase my coding quality by applying static code analysis tools like Microsoft Code Analysis and NDepend. Unfortunately, not every project size can afford NDepend. I read the Clean Code book, applied more design patterns, tested even harder, added more unit tests reaching high coverage, started to develop test-driven, applied better architecture and so on…
I have collected a lot of weapons against bugs, not only technical but also interpersonal skills. In fact, I'm aware that process and communication problems are the most common causes of bugs and problems with software; in my last measurement it was around 50%. This is, however, not the scope of this post. I don't want to talk about "process and concept" problems or noise in the communication from the customer all the way to the developers; that should be the topic of requirements engineering and other software development methodologies. How did I separate this from the above categories? I said to myself that if, during the fix of a bug, "tons" of files were changed or a "lot" of new code was written, then I categorized it as a concept bug. But as I said, this is out of scope for this post. What I want to cover is the following question.

Which development mistakes lead more often to bugs?

The poll above doesn't give you an answer to that. It just tells you what kinds of issues get fixed more often. For example, the number of bug fixes that involve removing duplicated code says nothing about the bugs where duplication wasn't recognized as the source, or where the chosen fix was not to remove the duplicate. Still, I believe this gives you a good hint about where to put more focus; for that reason it is part of my definition of done.

Should this question be asked at all? We should do everything correctly according to every rule of development, right? I guess in many cases it is not possible, neither in time nor in budget. Probably most developers have already met the worst "big ball of mud" style "spaghetti" code ever. I have, many times. Many times I wrote it myself; sad again, but my "past me" didn't know what my "future me" could write better. Still, we wanted to satisfy the customer's needs, many times successfully.
I don't know which categorization fits this question best. I created this one while working on a really high-quality web application. I based it on the check-ins related to the bugs (using Team Foundation Server) and looked at what actions were taken when fixing them. I created these categories as a starting point, and I hope you will be happy to contribute and see the results.


Google App Engine java.lang.NoClassDefFoundError for BasisLibrary

I've just started to experiment with the Google App Engine (GAE) using Java servlets. To tell the truth, I mainly wanted to play a little with the technology. The application is supposed to transform an XML document with the help of an XSLT file. It has some authentication and authorization implemented, reads from blob storage and Google Drive, sends emails and so on.

I deployed many times during development and everything was working. Then I started to use xsl:param with the help of the transformer.setParameter method. Locally, of course, everything worked fine the whole time. After deploying I got a 500 error. I checked the logs: java.lang.NoClassDefFoundError: com.sun.org.apache.xalan.internal.xsltc.runtime.BasisLibrary is a restricted class. Please see the Google App Engine developer's guide for more details.

Fine, I thought, and what can I do about it? This is a library used by the Transformer internally, and I find it really strange that Google restricts such classes. Anyway, I ended up implementing this functionality on my own by replacing the parameters as plain strings. Nice hack. By the way, you should use xsl:variable instead of xsl:param if you want to make it work…

After that I had another problem: ConfigurationException: Translet class loaded, but unable to create translet instance.
Oh my god… I thought. But luckily I found a solution here.
From there you have to go to the Xalan download area. Interestingly enough, the library hasn't changed since 2007.


A few considerations about .NET and time zones in web scenarios

Probably every web developer has already dealt with time zones. I just did again, and I want to make another note for the developer community around the globe (and my future self) on how I approached the topic this time.
In .NET there are two ways to get the current time zone (why have only one?):

TimeZone.CurrentTimeZone // type System.TimeZone
TimeZoneInfo.Local // type System.TimeZoneInfo

TimeZoneInfo is the newer one, satisfying some features required since Vista or something :). With this class you can also convert times into different time zones using the TimeZoneInfo.Id:

// TimeZoneInfo.Local.Id is something like "W. Europe Standard Time"
TimeZoneInfo.ConvertTimeBySystemTimeZoneId(DateTime.UtcNow, TimeZoneInfo.Local.Id);

Altogether this whole topic is not so nice. It is not easy to reproduce problems when developing against a local web server. I managed to come up with a "hacky" workaround:

  1. Stop the IIS Express processes
  2. Set the local time zone via the taskbar's clock (usually at the bottom right of your desktop), thereby defining the time zone of the web server
  3. Start debugging; this starts an IIS Express instance for you with your chosen time zone
  4. Add a watch on TimeZoneInfo.Local, so that you can always see what the current server time zone is.
    It was also written somewhere that this information is not thread-safe and not even process-safe.
  5. While running the app, switch your clock to your chosen client time zone, so that your browser's time zone matches the problem you want to reproduce

My specific problem was with Kendo grids, where binding to a DateTime struct happens automatically depending on the specified DateTimeKind…
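
As a minimal illustration of why the DateTimeKind matters (this is plain .NET behavior, not the actual Kendo binding code):

// The same wall-clock value behaves differently depending on its DateTimeKind.
var unspecified = new DateTime(2021, 6, 14, 12, 0, 0, DateTimeKind.Unspecified);
var utc = DateTime.SpecifyKind(unspecified, DateTimeKind.Utc);
var local = DateTime.SpecifyKind(unspecified, DateTimeKind.Local);

// ToLocalTime() treats an Unspecified value as UTC and converts it,
// while a Local value is returned unchanged.
Console.WriteLine(unspecified.ToLocalTime());
Console.WriteLine(utc.ToLocalTime());
Console.WriteLine(local.ToLocalTime());
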
I know it is hacky, but it's working; I hope I helped someone.


Entity Framework Code First Migrations in multiple branch scenarios

The basic information on how to handle merging migrations can be found in this great post from Anders Abel.

I just have an extension to it for a specific situation.

Simple case merge migration

This is the case when the merged migration (from the source branch) is the last migration on the target branch. The procedure is fairly simple and described in the post linked above: you only need to re-scaffold the Code First model (in the designer file of the migration). Just a note: don't use the "force" option here, so that you don't pull all the changes into your merged migration. Example below:

Add-Migration MergedMigrationClassName

Complicated case merge migration

If your migration is not the last one in the target branch, you have to do a couple of other things.

  1. Clean solution
  2. Delete database (for simplicity, it would be enough to update/downgrade the database to the target merged migration)
  3. Exclude migrations after the merged migration from the project
  4. Re-scaffold the merged migration as described above
  5. Include one of the excluded migrations
  6. Re-scaffold it as described above
  7. Repeat steps 5 and 6 until you have included all migrations

This approach worked for me, however it is not very intuitive. Please comment on this post if you find an easier way.
