Dave's blog.

  • Mind your dependencies, or, Where's your cookie popup?
  • My first open source PR!
  • Refactoring legacy code: a case study.
  • Product review: Ikea Bekant sit/stand desk (manual version)
  • Bash: create a new timestamped folder and tell me what it is
  • Book review: Release It! Second Edition by Michael Nygard
  • Apply custom policy-based rate limiting to Azure APIs using Azure API Management
  • Productivity: bash aliases
  • Working in a dev bubble(2): from labour camps to youth clubs
  • Book review: Pro ASP.NET Core MVC 2 by Adam Freeman
  • Working in a dev bubble(1): complexity
  • Git for beginners
  • Angular 2 demo at the Angular Meetup on Tuesday
  • New year new tech
  • YADB : yawn?
  • Back to the top

    Mind your dependencies, or, Where's your cookie popup?

    It's a relief when I load this page to not have to deal with an annoying cookie permission modal.

    I wish the EU had considered how their policy might be interpreted, and mandated a global 'apply the minimum' setting that could be set once by the user and respected by every site.

    Anyway I came across requestmap.webperf.tools which visualises the dependencies in a website, ie, which other libraries the site depends upon. Which makes for interesting viewing.

    This site is fairly simple, as there is no cookie handling, just a call to a CDN for the Bootstrap library.

    Tool output

    For an organisation that, in the UK at least, is supposed to not advertise, the BBC has a lot of adservers in play.

    Tool output

    Ah, but wait, that was for a test run on a non-UK server. Let's re-run that from a UK server and see how much cleaner it looks.

    Tool output

    Amazon keeps a lot of it in-house, or in-network at least.

    Tool output

    The prize for the busiest map of today goes to ZDNet for this rat's nest:

    Tool output

    Read more on the tool at Simon Hearne's blog and wonder whether the demise of the 56k modem is a good or a bad thing.

    Back to the top

    My first open source PR!

    OK so it's only a documentation change but documentation is important.

    I've been playing with create-react-blog and couldn't get it to work like the README suggested when adding new articles to the folder (the app reads the folders to compose the list of articles).

    It looks like somewhere along the way the naming conventions changed.

    The PR being merged at least confirmed that I wasn't doing something odd and might save the next person 15 minutes of head-scratching.

    Back to the top

    Refactoring legacy code: a case study.

    Often refactoring examples are fairly straightforward, leaving you wondering what to do with the difficult cases of legacy code.

    This process took me some ten commits and a couple of hours but it's worth looking at the steps to see a more complex example.

    Starting point

    We have a legacy class which downloads a file from Sharepoint.

    There are no tests, which is something I wanted to remedy. To do that we need to add interfaces around the Sharepoint objects, as they are near-enough impossible to mock: their constructors take Sharepoint context objects, for example:

    public Folder(ClientRuntimeContext context, ObjectPath objectPath);

    where ClientRuntimeContext is:

    protected ClientRuntimeContext(string webFullUrl);

    and so on, which means that the only (or at least one) sensible option is to wrap this class (in fact a number of classes) in Adapter pattern implementations.

    In full, the starting point for this exercise:

    private static void DownloadFile(ClientContext clientContext, File file, string fileName)
    {
      var maxAttempts = 10;
      for (var attemptNumber = 1; attemptNumber < maxAttempts + 1; attemptNumber++)
      {
        try
        {
          using (var fileStream = System.IO.File.Create(fileName))
          {
            var stream = file.OpenBinaryStream();
            clientContext.ExecuteQuery();        // run the queued Sharepoint request
            stream.Value.CopyTo(fileStream);     // write the downloaded content to disk
          }
          attemptNumber = maxAttempts + 1;       // success: leave the retry loop
        }
        catch (AggregateException ex) when (ex.Message.Contains("(503) Service Unavailable"))
        {
          Console.WriteLine($"   >> Getting file {fileName}: 503 returned. Waiting before call number {attemptNumber + 1} of {maxAttempts} attempts.");
          Thread.Sleep(350 + 200 * attemptNumber);   // back off a little more each attempt
        }
      }
    }

    OK, so here is what I did, with at each step the commit message I used.

    Back to the top

    Commit 1 : Using R# refactored the to-be-wrapped code into a new method

    Using Resharper I extracted out one of the method calls that use an object that I will need to mock to add some tests. Purely using Resharper here, as I do at almost every stage until I am able to write some tests.

    Back to the top

    Commit 2 : Added an interface for the refactored method and a wrapper class to apply the Adapter pattern to (but not used yet)

    I added a class which implements the Adapter pattern to wrap the problematic ClientContext. I then extracted an interface for it so that I can mock it later on.

    I also moved the call to LoadSharepointFile as it does not use the fileStream object so does not need to be in the using statement.
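    A sketch of the shape this adapter takes (not the full implementation: ClientContext here is the Sharepoint type, and ExecuteQuery stands in for whichever members the code actually needs; the interface should expose only those):

```csharp
// Sketch only: the interface exposes just the members the code under test uses.
public interface IClientContext
{
    void ExecuteQuery();
}

// Adapter: owns the real Sharepoint ClientContext and forwards calls to it,
// so the calling code can depend on the mockable IClientContext instead.
public class ClientContextWrapper : IClientContext
{
    private readonly ClientContext _clientContext;

    public ClientContextWrapper(ClientContext clientContext)
    {
        _clientContext = clientContext;
    }

    public void ExecuteQuery() => _clientContext.ExecuteQuery();
}
```

    In the tests a mock IClientContext can then stand in for the real context, which is the whole point of the exercise.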

    Back to the top

    Commit 3 : Wrap the concrete Sharepoint object in the new adapter

    I use the new ClientContextWrapper (adapter) to wrap the clientContext and then update LoadSharepointFile to accept an object which implements the new IClientContext interface I added in the last commit.

    Ideally the tests would come first, to protect the refactoring. In reality the code cannot easily support this, which is why I had to follow this process. However I can test empirically that the code is not being broken by these changes as I can run the application to assert that the files are still being downloaded as ever they were.

    That's not ideal, I agree, but to get the code into a testable state so that it is protected from changes sometimes you need to be pragmatic about how you get it there.

    Back to the top

    Commit 4 : Remove the redundant method and call the interface method

    Now I call the method on the extracted interface (implemented through the new ClientContextWrapper).

    Back to the top

    Commit 5 : Add a second method to the interface and move the wrapper creation up so that the method to be tested does not rely on the concrete Sharepoint type

    DownloadFile method is now accepting an IClientContext, taking it closer to being testable.

    Back to the top

    Commit 6 : Move the instantiation of the FileStream to where it is needed

    Here I limit the scope of the using block to what is necessary. I think I could have used Resharper but sometimes you can just eyeball things. Yes, that's OK. The commit is small enough to be able to look back and think "Yeah, that was fine".

    Back to the top

    Commit 7 : Extract CreateFile into a new class; extract interface and use this in the code. DownloadFile has 4 params but we can live with that for the moment

    Moving a bit quicker in this commit. I extract a new method, CreateFileFromSharepointClientResult, and then extract that into a new class, FileCreator, and then extract an interface for it, IFileCreator.

    In the calling method, create a FileCreator, which DownloadFile will accept as an IFileCreator. We can move these instantiations up the chain later but for just now this is fine and keeps the commits simple.

    Back to the top

    Commit 8 : Extracted ISharepointFile which is used in a call to IClientContext but there is still a dependency on ClientContext which we cannot instantiate in the tests. Hmm

    Similar to commit 7, I extracted an interface for a new adapter around the Sharepoint File object.

    In the calling method, create a SharepointFileAdapter, which DownloadFile will now accept as an ISharepointFile.

    Back to the top

    Commit 9 : Updated the wrapper to use its stored ClientContext so that the interface can lose its dependency on the Sharepoint ClientObject type in favour of an ISharepointFile which can be mocked

    IClientContext now loads an ISharepointFile, breaking the dependency on a Sharepoint ClientObject.

    In DownloadFile I can now pass in the ISharepointFile, tidying this up.

    Back to the top

    Commit 10 : Quick tidyup

    Now, finally, with the removal of the Load method from IClientContext, I have removed the dependencies on the Sharepoint types, in favour of interfaces.

    So, at last, I can write some tests to protect this code.

    Back to the top

    Commit 11 : Added tests for DownloadFile and its collaborators

    Firstly a test for the happy path, when Sharepoint is playing nicely.

    Experience has shown us that it can return a 503 so there is a retry mechanism in place. Now I test the case when a 503 is thrown in the first Sharepoint call.

    Which uses these helper methods:

    Now I test the case when a 503 is thrown in the second Sharepoint call.

    Tests, at last

    There are a few more commits (which I've omitted here as I think that this is long enough already) where the code is tidied up further, now that I have the security of the unit tests in place: renaming things, reducing the number of params to a method, and moving the classes & interfaces into their own files.

    The code is not yet perfect as you can see by the complexity of the unit tests: there is still a lot going on, but that's a matter for another day.

    I hope that this gives somebody a helpful view of taking some slightly messy code with some awkward dependencies, and shows how it can be refactored safely (almost entirely using Resharper) into a state where it can be protected by unit tests.

    Back to the top

    Product review: Ikea Bekant sit/stand desk (manual version)

    I've had this desk for about 4 months now so it feels time for a write-up.

    You don't have to look very hard to find people slating this desk, which put me off buying it. There is a standing desk at work but I never tried it, as I didn't want to use a laptop: that feels like it negates the ergonomic/postural benefit of standing up. I think a number of the negative reviews relate to the motor. I didn't want to pay for a motorised version, and as there is an Ikea nearby I could try out how manually raising & lowering it felt. Given that you want a standing desk because you feel you're not getting enough movement into your day, the effort of raising the desk is an added plus, I think.

    So how long does it take?

    It's not hard work and it takes about 57 seconds to raise it up to the level you can see below. I can't raise it much higher due to the book shelf but it's high enough for me.

    Do I use it?

    Yes. I know that there are varying thoughts on whether it is a good idea to go from sitting all day to standing all day, but I tend to stand up for maybe 5 hours. I find a side benefit is that I tend to move more anyway when it's raised. I think the point of the Standing Desk movement (if you can call it that) wasn't to stand as such but to not be immobile. And I think it is easier to remember to keep mobile when you are standing, as it's not as comfy as sitting (if it is, you need a new chair!)

    Some reviews mentioned it would not take weight reliably. Well, I have three monitors at about 6kg each, and often a stack of dev books on there too, and I've seen no problem when raising and lowering it.

    I also like that it is 80cm deep so that even with my monitor stand, which pushes the screens forward a little further than I'd like, I can get a 60cm gap between my eyes and the screen.

    Here it is in the seating position...

    ...And some 57 seconds later in the standing position...


    Back to the top

    Bash: create a new timestamped folder and tell me what it is

    mkdir my.name_$(date +%Y_%m-%d-%H-%M-%S); ls |grep _$(date +%Y_%m-%d) | sort -r | head -1
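    Unpacked into commented steps, the same idea looks like this (the my.name prefix is just an example):

```shell
#!/usr/bin/env bash
# Same idea as the one-liner above, split into commented steps.
stamp=$(date +%Y_%m-%d-%H-%M-%S)   # e.g. 2019_03-14-09-26-53
mkdir "my.name_$stamp"             # create the timestamped folder
# Tell me what it is: newest folder matching today's date (sort -r puts it first).
newest=$(ls | grep "_$(date +%Y_%m-%d)" | sort -r | head -1)
echo "$newest"
```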

    Back to the top

    Book review: Release It! Second Edition by Michael Nygard

    This great book covers a variety of topics in the space between the code being built and the user interacting with it.

    It covers four slices of the journey:

    • Creating Stability
    • Designing for Production
    • Delivering the System
    • Solving Systemic Problems

    Each section starts with a case study, or war story, from the author's experience, which serves to illustrate what can go wrong and how quickly. It also gives some pointers on the sort of factors to think about when approaching these problems.

    Some key messages I took from the book:

    • It's often more important for your team to be flexible rather than efficient. In order to respond quickly to issues in a deployed system or to address a new requirement quickly you will need to move through the development process quickly. Having a team that can code-test-build-deploy without handoffs to other teams can speed this up.

      My current team is unusual in my organisation in that we can operate independently as our system sits off to one side from the main platform. We can go from a bug fix being committed to git on a dev's machine to the fix being in Production in 30 mins. While it may on the face of it be more efficient to have specialists in different disciplines (having a team to manage the server/cloud and the deployments is the commonest example I guess) this will come at the expense of flexibility when you may need it the most. "A container ship trades efficiency for flexibility".

    • Your development eco-system should be treated as a Production environment. This is a frustration that I have seen at most places. Internal package feeds should be able to deliver the packages requested. Build agents should be available and kept up-to-date with the requirements needed to build the software. Machines, be they servers/VMs or developer laptops, should be up to the job. AzureDevops should be up (recently that's been a bit off the mark...). And more importantly an outage in this Production environment should be treated with the seriousness of one in the customer-facing one.

    In a bit more detail here is what I took from the sections

    Creating Stability:

    Plan for failures. Use CircuitBreakers. Build in Crumple Zones. Couple loosely.

    "A robust system keeps processing transactions, even when transient impulses, persistent stresses, or component failures disrupt normal processing. "

    For every I/O call ask "What are the ways this can go wrong?"

    Don't trust client libraries to handle connections cleanly.

    Blocked threads are the main cause of responsiveness issues.

    Use two-factor monitoring, ie, in addition to internal monitoring, monitor responsiveness from the outside to capture the user experience.

    Make domain objects immutable.

    Cache carefully: don't cache data that is cheap to get. "Keeping something in cache is a bet that the cost of generating it once, plus the cost of hashing and lookups, is less than the cost of generating it every time it's needed."

    Stagger your cron jobs to avoid an avalanche at 0001 hours.

    There are a number of patterns & anti-patterns for system stability.

    Automate what you can but build in limits so that you don't end up automating your operation to a halt.

    Designing for Production

    SiteScope can be used to simulate a customer base's traffic.

    You can recover much more quickly if you can restart components/services rather than whole servers. Remember that building up the cache can be what delays the restart, or rather the time until which the service becomes useful.

    Beware of the differences in internal clock times between servers. Use an NTP server instead.

    It's hard to debug in a container; log to an external target.

    Manage your dependencies; don't download from nuget straight into Production.

    Log widely to give transparency: "Debugging a transparent system is vastly easier, so transparent systems will mature faster than opaque ones"

    When your system is overloaded you need a method to shed the load, to try to help you recover. You need to be able to do this early on in the request handling pipeline, not after it's consumed a lot of resources.

    He introduces the idea that monitoring isn't just about system health (in that we want a healthy system); it's also about the financial health of the organisation:

    • "We should build our transparency in terms of revealing the way that the recent past, current state and future state connect to revenue and costs"
    • Check the queue length as a non-zero queue means something is slow. That is a potential loss of revenue.

    Canary deployments (to a subset of your VMs/instances) limit the risk of a deployment.

    Security: treat "Unauthorised (403) as Not Found (404)" to prevent an attacker finding a door to break into. It's easier to break into something if you know there is a locked door rather than a wall.

    Delivering the System

    He speaks a lot about version handling and how they relate to upgrades, eg, of schemas and of document structure in NoSql databases.

    Solving Systemic Problems

    Make sure QA are testing to warrant stability and functionality in Production, not just that things work in a much more limited QA environment.

    Code adaption: be prepared to retire the B in an A/B test. Don't starve A of resources in attempting to make B worthwhile.

    Microservices: use with care. You probably don't need them if you don't have scalability concerns as the debugging overhead can be considerable.

    Chaos engineering: if your dashboard is all green then your monitoring tools aren't good enough, as something somewhere will be below par.


    Back to the top

    Apply custom policy-based rate limiting to Azure APIs using Azure API Management

    The Azure docs lead me to think that if you are a Cloud Solution Provider (CSP) partner there is a way to limit API calls by CustomerId, but that is not available where you issue subscriptions to users.

    In our case each customer can have as many users as they wish. Each user has a subscription to an Azure Product (eg the Unlimited product).

    Just because we give access to the Unlimited product doesn't mean we want an unlimited flood of requests.

    What I needed to do was limit all the users for a company as a job lot to a given number of requests per minute.

    Limiting by subscription (ie, API key) is documented but we need a higher level grouping to rate-limit on. This does not seem to be available so I have done it this way:

    • Created three users. All users by default (and this can't be changed) are put into Developers. I also created two groups, Customer1 and Customer2 and put two users into the first and one into the second.
    • Go to API > All APIs > Inbound Processing > rate-limit-by-key

      This will let you edit the default policy on the APIs

    • Edit the policy

      It's C# 7 syntax but there can be only one policy per product or API. So I've applied a limit of 12 requests in a 60 second period, with the count being incremented by the name of the group where the group name starts with 'Customer'. This is to distinguish it from, say, the Developers group, which treats all customers' users the same.

      Doing this will keep track of the count of requests by customer no matter how many users they have, as long as each customer's users are correctly assigned to their group.

    • To see the policy in action (and how it is processed) you'll need to use the API inspector.

      Go to API > All APIs > Echo API > Test > GET

      This will let you test the default API

    • Click Send and then in the Http Response click on Trace to see how the request was processed.

      Aside from the headers and request params, as you would expect, you will also see the rate-limit-by-key processing result.

    • The one thing that is not ideal is that I cannot see how to test this as one of the test users, because you cannot override the subscription being used for the test request. In this case I added my own user to the Customer1 group to see how it is processed.

      It does work, as I can send requests from user 1 (who is in the Customer1 group) until the API returns a 429. At this point requests from user 2 (who is also in the Customer1 group) elicit a 429, but user 3 (who is in the Customer2 group) gets a 200.
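    As a sketch, the policy looks something like this. The counter-key expression is my reconstruction from the description above, not the exact production policy, so treat it as illustrative: it counts by the first group name starting with 'Customer', falling back to the subscription key for users not in any such group.

```xml
<inbound>
    <base />
    <!-- Sketch: 12 calls per 60 seconds, counted per Customer* group rather
         than per subscription. Expression reconstructed from the description
         above; adjust to taste. -->
    <rate-limit-by-key calls="12"
                       renewal-period="60"
                       counter-key='@(context.User.Groups.Select(g => g.Name).FirstOrDefault(n => n.StartsWith("Customer")) ?? context.Subscription.Key)' />
</inbound>
```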


    It's got some documentation but not too much covering this particularly niche application.

    More? See also policy samples, policies, throttling and a more complex expression example.

    Back to the top

    Productivity: bash aliases

    A lot of my colleagues use various visual git clients. When I migrated our source code from Vault to git in 2017 my CTO advised me strongly to use bash instead of visual tools.

    I took his advice and now I advise all new starters to do so.

    I think diffs are hard to read in bash (I know you can set a diff tool, but there is one built into VS), and it is easier to right-click files > Stage in the VS Team Explorer than to add files in bash.

    However I prefer to use bash for pretty much everything else git-related. The use of bash, and getting used to the shortcuts (that's a fairly large jump for most people coming from a Windows background) means that I've become happy with it such that I want to explore what more I can do to speed up desktop activity.

    I have added some shortcuts of my own to bash by putting them in my .bashrc file and asking bash to use it. (See how to get started with a .bashrc.)

    Here is part of my .bashrc file. This is stored in my Windows Users root folder (eg, C:\Users\dave\.bashrc)

    alias gmm='git checkout master;git pull; git checkout -; git merge master'
    I use this all the time. I've committed my code and tests. I've pushed to my branch so my work is secure. Now I want to pull in the latest from master. Typing gmm is much easier than (in VS) moving to a new branch; letting VS reload the project; pulling down; letting VS reload the project; then moving back to my branch and so on.

    alias gc='git checkout '
    You can chain these aliases together. Let's say I want to throw away what's in my branch and update with the latest master. I just do this:

    gc .;gmm
    Because my aliases don't end in semi-colons they accept parameters. In this case the dot says "undo all the changed files", so the whole command throws away the changes and then refreshes the branch with the latest code on the master branch on the remote repo.
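    You can see the mechanics with a stand-in demo (echo replaces git here so the sketch runs anywhere; note that scripts need expand_aliases, while interactive bash has it on by default):

```shell
#!/usr/bin/env bash
# Stand-in demo of alias behaviour: echo replaces git so this runs without a repo.
shopt -s expand_aliases          # scripts need this; interactive shells have it on

alias gl2='echo git log -2 '     # stand-in for: alias gl2='git log -2 '
gl2 --oneline > out.txt          # anything typed after the alias is appended
cat out.txt                      # prints: git log -2 --oneline
```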

    alias gl2='git log -2 '
    lets me see the last two changes. That's often all I want to see. Those are changes by anyone. What if I want to see just mine?

    alias glmine='git log --author=dave'
    lets me see my changes.

    glmine -2
    lets me see my last two changes.

    alias gst='git status '
    saves 7 key strokes per check. Do the maths!

    Further from git

    alias aproj='cd U:; cd Users/dave/Documents/a_proj'
    switches my working folder to my projects folder.

    Getting started with a .bashrc

    To get started, get my bashrc file and save it to your root folder as .bashrc. Then run the following (in bash, replacing dave with your username) to load the aliases into bash:

    source C:/Users/dave/.bashrc

    Back to the top

    Working in a dev bubble(2): from labour camps to youth clubs

    While I've seen C# salaries rise in Manchester quicker than inflation since 2011, employers also need to think about the non-cash benefits to try to hook in candidates.

    It's been a long time since a catering size can of Gold Blend was sufficient.

    On-site gym, fruit wall, weekly artisan coffee blends, free meals, Beer Friday, the list goes on.

    One firm even needs a person to help them select the cheese for their monthly cheese fest, to go along with their at-the-desk Gin Trolley and Beer Tap (for when a Beer Fridge just isn't enough).

    So far so exciting and so merry.

    At the risk of sounding like a Scrooge, what is the impact on productivity of such enticing diversions?

    MonkeyUser's Focus cartoon reminds me of part of the Joel Test.

    "8. Do programmers have quiet working conditions?
    There are extensively documented productivity gains provided by giving knowledge workers space, quiet, and privacy. The classic software management book Peopleware documents these productivity benefits extensively."

    Like MonkeyUser, Joel discusses the challenge of getting into "the zone", which is hard if the need to generate a buzz and excitement in the workplace turns it into something approximating a youth club. While no one wants to end up at the other end of the spectrum, a workplace resembling a North Korean labour camp, there is a balance to be made.

    I once worked somewhere where, in an office of three people, about ten words were said (not just to me but to anyone) all day. I had focus, but also a sense of alienation. So there is a balance to be struck if we are to maintain productivity and not just be in hock to a need to keep team members entertained, in what can seem like a race to the bottom of productivity in order to fill developer chairs.

    My concern is that the attention is focused on HR-driven goodies. In my experience many developers respond more strongly to a solid work process where they get well-qualified work, which challenges them, in realistic timescales, and which will actually see the light of day in Production.

    While beer is more easily obtained than a quality process and a clear road map, free beer on a Friday can only go so far to distract from a chaotic stream of context-switching from fire to fire.

    Back to the top

    Book review: Pro ASP.NET Core MVC 2 by Adam Freeman


    It's one of those books you can weigh on the bathroom scales rather than the kitchen ones. However that's all good....

    I've been mostly a back end dev over the last 6 years so have neglected ASP.net MVC. I'm OK with that in one way as it means that during the last three roles I have been focusing on what the company needs me to do to achieve their goals. As they didn't need MVC then me filling my brain with it is a bit of a distraction (when it could be filled with Knockout, say).

    However now that it's on .net core 2 I thought it was time to dip back in, and I have worked through most of this tome from Adam Freeman.

    Firstly the size. I once bought a similarly-long book from Apress on C# (4, I think) by Troelsen on my Kindle and it was unmanageable (it was one of the older Kindles with a quarter-second delay to flick back a page).

    I was worried that holding a 1000-page book would also be hard. Well, it is a little tiring if you hold it up, so I tended to read it on the desk.

    The reason it is so large is that the code excerpts are almost all complete. What I mean is that they aren't full of ellipses asking you to refer back 2 pages for this part and 5 pages for the next part. The other approach, which some publishers take, saves weight, trees and forearms, I get that, but it breaks the flow (like the quarter-second delay on the page-back on my Kindle did), and just as in work, so in reading technical matter: focus (or flow if you like) is important.

    Whether he covers everything he should or not I can't tell as it's not my home field. However I can see the differences from the last version I touched, which was ASP.net MVC 4. It's a lot nicer to work with.

    I was also able to get a moderately involved starter project going so it met my needs.


    Back to the top

    Working in a dev bubble(1): complexity

    Manchester is a dev skills bubble just now, no doubt about that.

    In the 7 years I have been here the top level advertised salary for a senior C# dev has risen from about £35-40k to about £60-£70k. Obviously some of those are speculative CV-harvesting operations, but from discussions I have had there are clearly a lot more jobs around, and better paid ones at that.

    Especially given that inflation alone would have seen that 2011 range rise to £41k to £47k.

    While that is great news for me as a seller of technical skills, it also presents some challenges as someone who leads other people to deliver the work that I have to. How so? Well, when I was recruiting developers in my Manchester role in 2012 we could be a bit choosy about how often a candidate switched jobs. We could worry that she or he may not stay long enough to get up to speed and earn their salary. In fact in my second role we anticipated that, given the complexity of the system, if someone was fully effective in six months that was a good result.

    The problem is that in today's hot market good devs can be tempted away after not so long in a role. They can afford to be fickle. Because other employers are so (let's not say "desperate") "keen" to fill vacancies, people can leave after a relatively short time in their current role. If we want to fill empty seats we cannot be as choosy as once we (or at least my then-manager and I) were.

    The flip-side is that as a purchaser of skills I cannot afford to let someone take 12 months to earn their salary, as if they leave within two years (which seems about a 20%-33% chance at the moment) then we will not have covered our costs.

    The only way to mitigate this is to structure systems more simply.

    Sure, we've always wanted to do that haven't we?

    Yes, but to the usual arguments for this (ease of maintenance, ability to rotate staff through, easier debugging) we now need to add the economic reality that if we can't get staff up to speed quickly then we risk losing money on employing them.

    So more than ever the classic messages of decoupled architectures and TDD have a key role to play in onboarding team members and getting them productive.

    In my first .net role in 2006 it was six weeks before I worked on code that made it into the Production release. This week a new starter (with 2.5 years in the industry) had written code (and unit tests) which was in Prod by the end of his second day. The level of test coverage gives confidence that the change is safe and the new starter can start to earn their salary straight away.

    While succession planning helps to mitigate the impact of departures, and company culture should help to prevent them, structuring the code so that devs are productive from the get-go is crucial.

    Back to the top

    Git for beginners

    I've added some info to our internal Confluence wiki for new devs who don't have a git background. That group included me a year ago so I recall how much of a mind shift it is from a centralised VCS.

    Here are my notes/links for future ref (as much by me as anyone else).


    Once you have set up your account on BitBucket, follow the steps on Bitbucket (dropping down "Set up SSH for Windows") to set up SSH. I would stop at step 6 as at that point you can clone the repos.

    Configure git

    Mandatory. In git bash:

    • git config --global user.email [your email]
    • git config --global user.name [your name]


    • Set your favourite mergetool for merging: git config --global merge.tool "winmerge". You can see a list of which suitable tools you currently have installed by running: git mergetool --tool-help

    • Create an alias for a customised version of git log (usage : git lg): git config --global alias.lg "log --pretty=' %Cred%h%Creset | %C(yellow)%d%Creset %s %Cgreen(%cr)%Creset %Ccyan%Creset' --graph"

    • Improve the git diff algorithm: git config --global diff.algorithm histogram

    • Make merge issues show the base file before conflicts as well as the actual conflicts: git config --global merge.conflictStyle diff3

    • Configure Git-Bash (if you intend on using it) to point to your local repository file path by default each time it loads up

    • Right click on the Git Bash.exe shortcut and click Properties.

    • In the Shortcut Tab, enter your local Git repository path in the "Start In" textbox. For example, "C:\Git" without the quotation marks

    • If present, remove the "--cd-to-home" text from the Target path as this effectively overrides the 'Start In' path
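    The diff3 conflict style is easiest to appreciate with a throwaway repo (a sketch; assumes git is on the PATH). Conflicts gain a ||||||| section showing the common-ancestor version as well as "ours" and "theirs":

```shell
#!/usr/bin/env bash
# Throwaway-repo demo of merge.conflictStyle=diff3.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name You
git config merge.conflictStyle diff3

echo base > f;   git add f; git commit -qm base
git checkout -qb other
echo theirs > f; git commit -qam theirs
git checkout -q -
echo ours > f;   git commit -qam ours
git merge other || true    # the conflict is the point of the demo
cat f                      # conflict markers now include the ||||||| base section
```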

    Learning git

    There are two tools to help you get to grips with branching (with live testbeds):

    Back to the top

    Angular 2 demo at the Angular Meetup on Tuesday

    A busy week. Starting a new job on Tuesday morning and then in the evening I'm demoing where I have got up to with my Angular 2 (now 4) side project at the Angular Workshop at MadLab.

    To fuel my learning I've mostly been using the Angular 2 book from Fullstack.io. This is a subscription-model book which is a little eccentric, but it is updated regularly. Given that ng moves so quickly you can waste a lot of time watching videos that are even a few months old.

    It's exciting that tech moves so quickly now, as productivity is booming, but it's often hard to make the time to keep up on what is not your crust-earning tech stack.

    Back to the top

    New year new tech

    Last year I was learning Node and have built up a bit of a backend for a side project. Now it's onto the front end, in Angular 2.

    Back to the top

    YADB : yawn?

    Yet another development blog? Yawn.

    Well, we're now recruiting again and I like to see evidence of work/thoughts/commitment from candidates. I think I should be able to bear up to the same examination so here goes.

    I accept that not everyone can find the time, when balancing work, commuting and domestic commitments, to work on a side project sufficient to impress a prospective employer, but I think that there is a lower bar to entry for a blog.

    And as it is we've had a majorly hectic year with a conversion of a Silverlight app to a shiny new HTML5/Knockout SPA so I recognise that the time and mental clarity to come home and write ace code after dinner may not always be there if you are putting in the hours.

    Anyway let's see how this goes.