Team Foundation Server 2017 (on-premises) has a new feature: you can use variables in the repository mappings. There is, however, a bug related to the “Get Sources” step.
If at least one repository path is mapped statically (without variables), the latest version is calculated based on that path alone. This usually results in the dynamically mapped repository paths not getting the latest version. The following trick solves the problem:
Just add an $(empty.path) variable usage to the “static” mappings as well, to ensure the latest version for the build.
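To make the trick concrete, the mappings could look like the sketch below. The variable name empty.path is only an illustration: you define it yourself as a build variable with an empty string as its value, so it changes nothing in the resulting path but makes the mapping count as “dynamic”:

```text
Server path: $/MyProject/Main$(empty.path)       <- formerly static mapping
Server path: $/MyProject/$(feature.branch)/Src   <- really dynamic mapping
```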
Every time I get a new Raspberry Pi and want to use Node.js on it, I sooner or later end up doing the same things, so I have created a list here. The references can be found at the end.
# Update the Pi's OS and its packages
sudo apt-get update && sudo apt-get dist-upgrade
# Start something on startup
sudo nano /etc/rc.local
# Set up remote desktop
sudo apt-get install xrdp
# Set the right resolution (uncomment and adjust the HDMI mode section)
sudo nano /boot/config.txt
# Install Node.js on the Pi (download node_latest_armhf.deb for your ARM version first)
sudo dpkg -i node_latest_armhf.deb
# Test it (restart the terminal if needed)
node -v
# If the nodejs-legacy package conflicts, remove both and reinstall
sudo apt-get remove nodejs nodejs-legacy
# After that you may need to do the following
sudo apt-get update
sudo apt-get install node-gyp
# Edit the /usr/include/nodejs/deps/v8/include/v8.h file as described here https://github.com/fivdi/onoff/wiki/Node.js-v0.10.29-and-native-addons-on-the-Raspberry-Pi
# If needed (GPIO access)
sudo npm install onoff
# If needed (native addon builds)
sudo npm install -g node-gyp
# If needed (DHT temperature/humidity sensors)
sudo npm install node-dht-sensor
A long time has passed since I last wrote a blog post. The reason was not only that I didn’t have that much to blog about, but also that I switched to a new company. This first article is nothing big either: it is about a useful link to read when learning Git branching.
Recently I wanted to see how a server behaves when running some SQL statements in parallel against one of its databases. I wanted to simulate a real user session, so I decided to record a scenario locally and run it against the server. I planned to do this using the Replay function of SQL Server Profiler. To record a “replayable” session, a few settings have to be changed compared to the Standard (default) template:
When starting a new trace, go to the Events Selection tab
Check the Show all events checkbox and add the following events:
Stored Procedures group
RPC Output Parameter
Exec Prepared SQL
Uncheck the Show all events checkbox
Check the Show all columns checkbox
For every event, check the checkboxes of the columns required for replay in the grid
There you go. You can then start replaying sessions :)
Yet another timezone post. Did I mention that I don’t like timezones? I don’t understand why the browser can’t simply give us a timezone identifier according to a standard like IANA.
Anyway, I ran into a problem related to Daylight Saving Time (DST). I needed to create an Excel file (and some PDFs) on the server side as a response to a button click. I immediately thought of the problem that I need to send the timezone offset in minutes from the browser, so that I can present correct times to the person using the app. We generally store date values in UTC in our database.
“Luckily” I started this task on a day that was in winter time and finished it on a day that was already in summer time. I created a report on Friday and again on Monday, and saw that the corrected time was different. Oh yeah: the time has to be adapted to “local time” not using the current timezone offset, but using the offset that was in effect at that “local time” in the browser’s timezone. The solution is easy, I thought: I just need to send the timezone information to the server instead of the offset. It turned out not to be so easy.
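To make the DST pitfall concrete: the built-in Intl API (in modern browsers and Node.js) can format a UTC instant using the offset that was valid at that instant in a named IANA zone. Browser support for this was much weaker at the time, so take it as an illustration only; the dates and the Australia/Sydney zone are just examples:

```javascript
// Format a UTC instant in a named IANA timezone; Intl picks the offset
// that was valid at that instant (DST-aware), not today's offset.
function toZoned(utcIso, timeZone) {
  return new Intl.DateTimeFormat('en-GB', {
    timeZone,
    year: 'numeric', month: '2-digit', day: '2-digit',
    hour: '2-digit', minute: '2-digit', hour12: false,
  }).format(new Date(utcIso));
}

// Sydney is UTC+11 in January (DST) but UTC+10 in July:
console.log(toZoned('2016-01-15T00:00:00Z', 'Australia/Sydney')); // 11:00 local (AEDT)
console.log(toZoned('2016-07-15T00:00:00Z', 'Australia/Sydney')); // 10:00 local (AEST)
```

A single offset sent from the browser would have produced the same local time for both dates, which is exactly the bug I saw.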
Then I found a solution using moment.js. It was pretty hard to figure out how to create the moment-timezone-data.js file, until I found the Data button on their site (http://momentjs.com/timezone/data/), clicked together a “Browser” variant and copied it into the above-named JS file. The problem was that this solution didn’t work for the specific timezone I needed (in Australia). Sad, but this led me to really understand the problem.
In the end, my solution was the following:
As the main solution
Use the HTML5 Geolocation API (navigator.geolocation) to get longitude and latitude information, as described e.g. over here.
The longitude and latitude information can then be combined to determine the timezone, as described here.
As a fallback, if the user doesn’t share his/her geolocation
The HTML5 Geolocation API is much more precise, but the user cannot be forced to allow it. jsTimezoneDetect is quite OK as a fallback solution; it worked for the timezones I actually needed.
I also know that Noda Time is a bit of an overkill, as one could also just use the Unicode CLDR data for the IANA-to-Windows identifier resolution, but I was just too lazy :)
The longitude and latitude information can also be combined with data from online services, as described here. I didn’t like that approach at all, therefore I applied the solution above.
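Put together, the main solution with the fallback can be sketched roughly like this. `lookupTimeZone` stands for the coordinates-to-timezone resolution described above and is a hypothetical function here; the fallback could be jsTimezoneDetect’s `jstz.determine().name()`:

```javascript
// Prefer the Geolocation API; fall back when the user declines or the
// API is unavailable. Resolves to an IANA timezone id.
// `lookupTimeZone(lat, lon)` is a placeholder for your coords-to-zone service.
function detectTimeZone(nav, lookupTimeZone, fallback) {
  return new Promise((resolve) => {
    if (nav && nav.geolocation) {
      nav.geolocation.getCurrentPosition(
        (pos) => resolve(lookupTimeZone(pos.coords.latitude, pos.coords.longitude)),
        () => resolve(fallback()) // user said no, or the position failed
      );
    } else {
      resolve(fallback()); // no Geolocation API at all
    }
  });
}

// In the browser you would call it roughly as:
//   detectTimeZone(navigator, coordsToZoneService, () => jstz.determine().name())
```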
You are probably writing “bug-free” code, right? Me neither. Really sad…
I do want to, and our customers want it too. For that reason I created a statistic in the projects I worked on, to find out the most common causes of bugs. The poll below already contains my results. If you fix a bug, please come back here and fill out this poll!
I categorized bugs based on the actions a developer took when fixing them. Be careful when filling out the poll, as careless answers heavily affect the results. If you don’t understand something in the poll, read the explanation after it carefully. “Process and Concept” problems are not covered; please read the explanation to see what I categorized as such.
You can select one or more of the categories above, or suggest a new one in a comment. I am mainly interested in the actions you took when fixing the bug; of course it could be more than one of the above. I will try to explain some of the categories; if you have further questions, please comment on this blog.
If I fixed the names of variables, classes or methods, I selected the Wrong Naming category.
When I needed to reduce the nesting of “if” statements to understand what the code was about, I checked the Reduce nesting option.
Removing a duplicate to fix a bug resulted in a point for Code duplication.
If I reduced the size of a class or method by extracting a class or method, I counted it under the Too long class or method category.
Adding an argument check or a null reference check meant selecting the Argument check category.
If I needed to do a bigger refactoring of a class based on its dependencies (like too many injected constructor parameters), I counted it under Reduce dependent and depending types. Read further here.
Cohesion is a bigger topic. Fixing bugs this way meant raising the cohesion to a higher level (preferably functional cohesion). Simply said, it is about grouping modules more semantically. If I grouped the methods or fields of a class differently (with only slight changes to them), or moved classes from one namespace into other namespaces, I increased the votes here. If you want to read further, check this Wikipedia page.
When programming in an object-oriented way, we should follow the SOLID principles. Where that wasn’t done completely and I fixed it accordingly, I voted for one of these categories.
If a bug came back over and over as a regression, I introduced a unit test for it. In fact I try to add a unit test for every bug I fix, as I find those unit tests the most useful. Anyway, in these cases I selected the corresponding category.
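To illustrate just one of the categories above, Reduce nesting: replacing nested ifs with guard clauses often exposed the actual bug. A minimal made-up example (the order and shipping names are invented for this sketch):

```javascript
// Before: the interesting logic hides three levels deep.
function shippingCostNested(order) {
  if (order) {
    if (order.items.length > 0) {
      if (!order.isDigital) {
        return 5;
      }
    }
  }
  return 0;
}

// After: guard clauses keep the happy path flat and readable,
// with the same behavior for every input.
function shippingCost(order) {
  if (!order) return 0;
  if (order.items.length === 0) return 0;
  if (order.isDigital) return 0;
  return 5;
}
```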
A few years ago I started trying to increase my code quality by applying static code analysis tools like Microsoft Code Analysis and NDepend. Unfortunately not every project is of a size that can afford NDepend. I read the Clean Code book, applied more design patterns, tested even more heavily, added more unit tests to reach high coverage, started to develop test-driven, applied better architecture and so on…
I have collected a lot of weapons against bugs, not only technical ones but also interpersonal skills. In fact I am aware that these are the most common causes of bugs and problems with software; in my last measurement it was around 50%. This is however not the scope of this post. I don’t want to talk about “Process and Concept” problems or noise in the communication on the way from the customer to the developers; that should be the topic of requirements engineering and other software development methodologies. How did I separate these from the above categories? I told myself that if “tons” of files were changed or a “lot” of new code was written while solving a bug, I categorized it as a concept bug. But as I said, that is out of the scope of this post. What I want to cover is the following question.
Which development mistakes lead more often to bugs?
The poll above doesn’t give you an answer to that. It just tells you which types of fixes are made more often. For example, the number of bug fixes containing the removal of duplicated code says nothing about the bugs where duplication wasn’t recognized as the source, or where the solution was not to remove the duplicate. Still, I believe it gives you a good hint about where to put more focus; for that reason, it is part of my definition of done.
Should this question be asked at all? We should do everything correctly, according to every rule of development, right? I guess in many cases that is not possible in time and budget. Probably most developers have already met the worst “big ball of mud” style “spaghetti” code ever. I have, many times. Many times I wrote it myself; sad again, but my “past me” didn’t know what my “future me” could write better. Still, we wanted to satisfy the customer’s needs, and many times we succeeded.
I don’t know which categorization is best for this question. I created this one while working on a really high-quality web application. I based it on the check-ins linked to bugs (using Team Foundation Server) and looked at what actions were taken when fixing them. I created these categories as a starting point, and I hope you will be happy to contribute and see the results.
I have just started to experiment with Google App Engine (GAE) using Java servlets. To tell the truth, I also wanted to play a little with the technology. The application is supposed to transform an XML file with the help of an XSLT file. It has some authentication and authorization implemented, reads from blob storage and Google Drive, sends emails and so on.
I deployed many times during development and everything was working. Then I started to use xsl:param with the help of the transformer.setParameter method. Locally, of course, everything worked fine the whole time. After deploying I got a 500 error. I checked the logs: java.lang.NoClassDefFoundError: com.sun.org.apache.xalan.internal.xsltc.runtime.BasisLibrary is a restricted class. Please see the Google App Engine developer’s guide for more details.
Fine, I thought, and what can I do about it? This library is used by the Transformer internally. I find it really strange that Google restricts such classes. Anyway, I ended up implementing this functionality on my own by replacing the parameters as strings. Nice hack. By the way, you should use xsl:variable instead of xsl:param if you want to make it work…
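The string-replacement hack, sketched here in JavaScript for brevity rather than the original Java servlet code. The `__name__` placeholder convention is my own assumption for this sketch; the idea is simply to inject the values into the stylesheet source before compiling it, instead of calling setParameter:

```javascript
// Replace __name__ placeholders in the stylesheet source with the
// parameter values, so no setParameter() call is needed at runtime.
// Placeholders would typically sit inside xsl:variable elements.
function injectXslParams(xsltSource, params) {
  let result = xsltSource;
  for (const [name, value] of Object.entries(params)) {
    result = result.split(`__${name}__`).join(String(value));
  }
  return result;
}

const xslt = '<xsl:variable name="title">__title__</xsl:variable>';
console.log(injectXslParams(xslt, { title: 'Report 2016' }));
// <xsl:variable name="title">Report 2016</xsl:variable>
```

Note that the values are injected verbatim, so in real use they would still need XML-escaping.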
After that I had another problem: ConfigurationException: Translet class loaded, but unable to create translet instance.
Oh my god… I thought. But luckily I found a solution here.
From there you have to go to the Xalan download area. Interestingly enough, the library hasn’t changed since 2007.