YADB

Dave's blog.

  • Book review: Release It! by Michael Nygard
  • Apply custom policy-based rate limiting to Azure APIs using Azure API Management
  • Productivity: bash aliases
  • Working in a dev bubble(2): from labour camps to youth clubs
  • Book review: Pro ASP.NET Core MVC 2 by Adam Freeman
  • Working in a dev bubble(1): complexity
  • Git for beginners
  • Angular 2 demo at the Angular Meetup on Tuesday
  • New year new tech
  • YADB : yawn?
  • Book review: Release It! Second Edition by Michael Nygard

    This great book covers a variety of topics in the space between the code being built and the user interacting with it.

    It covers four slices of the journey:

    • Creating Stability
    • Designing for Production
    • Delivering the System
    • Solving Systemic Problems

    Each section starts with a case study, or war story, from the author's experience, which serves to illustrate what can go wrong and how quickly. It also gives some pointers on the sort of factors to think about when deciding how to approach such problems.

    Some key messages I took from the book:

    • It's often more important for your team to be flexible rather than efficient. To respond to issues in a deployed system, or to address a new requirement, you need to move through the development process quickly. Having a team that can code-test-build-deploy without handoffs to other teams speeds this up.

      My current team is unusual in my organisation in that we can operate independently as our system sits off to one side from the main platform. We can go from a bug fix being committed to git on a dev's machine to the fix being in Production in 30 mins. While it may on the face of it be more efficient to have specialists in different disciplines (having a team to manage the server/cloud and the deployments is the commonest example I guess) this will come at the expense of flexibility when you may need it the most. "A container ship trades efficiency for flexibility".

    • Your development eco-system should be treated as a Production environment. This is a frustration that I have seen at most places. Internal package feeds should be able to deliver the packages requested. Build agents should be available and kept up to date with everything needed to build the software. Machines, be they servers/VMs or developer laptops, should be up to the job. Azure DevOps should be up (recently that's been a bit off the mark). More importantly, an outage in this Production environment should be treated with the seriousness of one in the customer-facing one.

    In a bit more detail, here is what I took from each section.

    Creating Stability:

    Plan for failures. Use CircuitBreakers. Build in Crumple Zones. Couple loosely.

    "A robust system keeps processing transactions, even when transient impulses, persistent stresses, or component failures disrupt normal processing. "

    For every I/O call ask "What are the ways this can go wrong?"

    Don't trust client libraries to handle connections cleanly.

    Blocked threads are the main cause of responsiveness issues.

    Use two-factor monitoring, i.e., in addition to internal monitoring, monitor responsiveness from the outside to capture the user experience.

    Make domain objects immutable.

    Cache carefully: don't cache data that is cheap to get. "Keeping something in cache is a bet that the cost of generating it once, plus the cost of hashing and lookups, is less than the cost of generating it every time it's needed."

    Stagger your cron jobs to avoid an avalanche at 0001 hours.
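    To illustrate the staggering idea: rather than every nightly job firing at one minute past midnight, spread the minute values out. A sketch of what that looks like in a crontab (the job paths are made up for illustration):

```
# Staggered nightly jobs instead of an avalanche at 00:01
1 0 * * *   /opt/jobs/reports.sh    # 00:01
17 0 * * *  /opt/jobs/cleanup.sh    # 00:17
43 0 * * *  /opt/jobs/backup.sh     # 00:43
```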

    There are a number of patterns & anti-patterns for system stability.

    Automate what you can but build in limits so that you don't end up automating your operation to a halt.

    Designing for Production

    SiteScope can be used to simulate a customer base's traffic.

    You can recover much more quickly if you can restart components/services rather than whole servers. Remember that building up the cache can be what delays the restart, or rather the point at which the service becomes useful again.

    Beware of differences in internal clock times between servers. Use an NTP server to keep them in sync.

    It's hard to debug in a container; log to an external target.

    Manage your dependencies; don't download from nuget straight into Production.

    Log widely to give transparency: "Debugging a transparent system is vastly easier, so transparent systems will mature faster than opaque ones"

    When your system is overloaded you need a method to shed the load, to try to help you recover. You need to be able to do this early on in the request handling pipeline, not after it's consumed a lot of resources.

    He introduces the idea that monitoring isn't just about system health, in that we want a healthy system. It's also about the financial health of the organisation:

    • "We should build our transparency in terms of revealing the way that the recent past, current state and future state connect to revenue and costs"
    • Check the queue length as a non-zero queue means something is slow. That is a potential loss of revenue.

    Canary deployments (to a subset of your VMs/instances) limit the risk of a deployment.

    Security: treat "Forbidden (403)" as "Not Found (404)" to prevent an attacker finding a door to break into. It's easier to break into something if you know there is a locked door rather than a wall.

    Delivering the System

    He speaks a lot about version handling and how versions relate to upgrades, e.g., of schemas and of document structure in NoSQL databases.

    Solving Systemic Problems

    Make sure QA are testing to guarantee stability and functionality in Production, not just to work in a much more limited QA environment.

    Code adaptation: be prepared to retire the B in an A/B test. Don't starve A of resources in attempting to make B worthwhile.

    Microservices: use with care. You probably don't need them if you don't have scalability concerns as the debugging overhead can be considerable.

    Chaos engineering: if your dashboard is all green then your monitoring tools aren't good enough, as something somewhere will be below par.

    Recommended.

    Apply custom policy-based rate limiting to Azure APIs using Azure API Management

    The Azure docs led me to think that if you are a Cloud Solution Provider (CSP) partner there is a way to limit API calls by CustomerId, but that is not available where you issue subscriptions to users.

    In our case each customer can have as many users as they wish. Each user has a subscription to an Azure Product (eg the Unlimited product).

    Just because we give access to the Unlimited product doesn't mean we want an unlimited flood of requests.

    What I needed to do was limit all the users for a company as a job lot to a given number of requests per minute.

    Limiting by subscription (i.e., API key) is documented, but we need a higher-level grouping to rate-limit on. This does not seem to be available, so I have done it this way:

    • Created three users. All users by default (and this can't be changed) are put into Developers. I also created two groups, Customer1 and Customer2 and put two users into the first and one into the second.
    • Go to APIs > All APIs > Inbound Processing > rate-limit-by-key

      This will let you edit the default policy on the APIs

    • Edit the policy

      It's C# 7 syntax, but there can be only one policy per product or API. So I've applied a limit of 12 requests in a 60-second period, with the count keyed on the name of the group where the group name starts with 'Customer'. This distinguishes it from, say, the Developers group, which treats all customers' users the same.

      Doing this will keep track of the count of requests by customer no matter how many users they have, as long as each customer's users are correctly assigned to their group.

    • To see the policy in action (and how it's processed) you'll need to use the API inspector.

      Go to APIs > All APIs > Echo API > Test > GET

      This will let you test the default API

    • Click Send and then in the Http Response click on Trace to see how the request was processed.

      Aside from the headers and request parameters, as you would expect, you will also see the rate-limit-by-key processing result.

    • The one thing that is not ideal is that I cannot see how to test this as one of the test users, as you cannot override the subscription used for the test request. In this case I added my own user to the Customer1 group to see how it is processed.

      It does work: I can send requests from user 1 (who is in the Customer1 group) until the API returns a 429. At this point requests from user 2 (who is also in the Customer1 group) elicit a 429, but user 3 (who is in the Customer2 group) gets a 200.

      Lovely.

    APIM has some documentation, but not much covering this particularly niche application.
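    For reference, the policy described above looks roughly like this. This is a sketch, not my exact XML: the attributes are the documented ones for rate-limit-by-key, but the counter-key expression here is my reconstruction, falling back to the subscription id for users not in a Customer group.

```xml
<inbound>
    <base />
    <!-- 12 calls per 60s, counted per customer group rather than per subscription -->
    <rate-limit-by-key calls="12" renewal-period="60"
        counter-key="@(context.User.Groups
            .Where(g => g.Name.StartsWith(&quot;Customer&quot;))
            .Select(g => g.Name)
            .FirstOrDefault() ?? context.Subscription.Id)" />
</inbound>
```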

    More? See also policy samples, policies, throttling and a more complex expression example.

    Productivity: bash aliases

    A lot of my colleagues use various visual git clients. When I migrated our source code from Vault to git in 2017 my CTO advised me strongly to use bash instead of visual tools.

    I took his advice and now I advise all new starters to do so.

    I do think diffs are hard to read in bash (I know you can set a diff tool, but there is one built into VS), and it is easier to right-click files > Stage in the VS Team Explorer than to add files in bash.

    However, I prefer bash for pretty much everything else git-related. Getting used to bash and its shortcuts (a fairly large jump for most people coming from a Windows background) has made me comfortable enough that I want to explore what more I can do to speed up desktop activity.

    I have specified some shortcuts of my own into bash by adding them to my .bashrc file and asking bash to use it. (See how to get started with a .bashrc.)

    Here is part of my .bashrc file. This is stored in my Windows Users root folder (eg, C:\Users\dave\.bashrc)

    alias gmm='git checkout master;git pull; git checkout -; git merge master'
    I use this all the time. I've committed my code and tests. I've pushed to my branch so my work is secure. Now I want to pull in the latest from master. Typing gmm is much easier than (in VS) moving to a new branch; letting VS reload the project; pulling down; letting VS reload the project; then moving back to my branch and so on.

    alias gc='git checkout '
    You can chain these aliases together. Let's say I want to throw away what's in my branch and update with the latest master. I just do this:

    gc .;gmm
    Because my aliases don't end in semi-colons, anything typed after them is appended to the last command in the expansion, so they accept parameters. In this case the dot says "undo all the changed files", so the whole command throws away the changes and then refreshes the branch with the latest code on the master branch on the remote repo.

    alias gl2='git log -2 '
    lets me see the last two changes. That's often all I want to see. Those are changes by anyone. What if I want to see just mine?

    alias glmine='git log --author=dave'
    lets me see my changes.

    glmine -2
    lets me see my last two changes.

    alias gst='git status '
    saves 7 key strokes per check. Do the maths!

    Further from git

    alias aproj='cd U:; cd Users/dave/Documents/a_proj'
    switches my working folder to my projects folder.

    Getting started with a .bashrc

    To get started, get my bashrc file and save it to your root folder as .bashrc. Then run the following (in bash, replacing dave with your username) to load the aliases into bash:

    source C:/Users/dave/.bashrc
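    Putting the aliases above together, a minimal .bashrc looks like this (the author name and paths are examples; adjust to taste):

```shell
# ~/.bashrc — git shortcuts
alias gst='git status '
alias gc='git checkout '
alias gl2='git log -2 '
alias glmine='git log --author=dave'
# pull the latest master, then merge it into the branch you were on
alias gmm='git checkout master; git pull; git checkout -; git merge master'
```

    Once sourced, the chaining described above works as-is, e.g. gc .;gmm to discard local changes and refresh from master.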

    Working in a dev bubble(2): from labour camps to youth clubs

    While I've seen C# salaries in Manchester rise quicker than inflation since 2011, employers also need to think about non-cash benefits to try to hook in candidates.

    It's been a long time since a catering size can of Gold Blend was sufficient.

    On-site gym, fruit wall, weekly artisan coffee blends, free meals, Beer Friday, the list goes on.

    One firm even needs a person to help them select the cheese for their monthly cheese fest, to go along with their monthly at-the-desk Gin Trolley and Beer Tap (for when a Beer Fridge just isn't enough).

    So far so exciting and so merry.

    At the risk of sounding like a Scrooge, what is the impact on productivity of such enticing diversions?

    MonkeyUser's Focus cartoon reminds me of part of the Joel Test.

    "8. Do programmers have quiet working conditions?
    There are extensively documented productivity gains provided by giving knowledge workers space, quiet, and privacy. The classic software management book Peopleware documents these productivity benefits extensively."

    Like MonkeyUser, Joel discusses the challenge of getting into "the zone", which is hard if the need to generate buzz and excitement in the workplace turns it into something approximating a youth club. While no one wants to end up at the other end of the spectrum, a workplace resembling a North Korean labour camp, there is a balance to be struck.

    I once worked somewhere where, in an office of three people, about ten words were said all day (not just to me but to anyone). I had focus, but also a sense of alienation. So there is a balance to find if we are to stay productive and not just be in hock to a need to keep team members entertained, in what can seem like a race to the bottom of productivity in order to fill developer chairs.

    My concern is that the attention is focused on HR-driven goodies. In my experience many developers respond more strongly to a solid work process where they get well-qualified work which challenges them, in realistic timescales, and which will actually see the light of day in Production.

    While beer is more easily obtained than a quality process and a clear road map, free beer on a Friday can only go so far to distract from a chaotic stream of context-switching from fire to fire.

    Book review: Pro ASP.NET Core MVC 2 by Adam Freeman

    BOOM!

    It's one of those books you can weigh on the bathroom scales rather than the kitchen ones. However that's all good....

    I've been mostly a back end dev over the last 6 years so have neglected ASP.net MVC. I'm OK with that in one way as it means that during the last three roles I have been focusing on what the company needs me to do to achieve their goals. As they didn't need MVC then me filling my brain with it is a bit of a distraction (when it could be filled with Knockout, say).

    However, now that it's on .NET Core 2 I thought it was time to dip back in, and I have worked through most of this tome from Adam Freeman.

    Firstly the size. I once bought a similarly-long book from Apress on C# (4, I think) by Troelsen on my Kindle and it was unmanageable (it was one of the older Kindles with a quarter-second delay to flick back a page).

    I was worried that holding a 1000-page book would also be hard. Well, it is a little tiring if you hold it up, so I tended to read it on the desk.

    The reason it is so large is that the code excerpts are almost all complete. What I mean is that they aren't full of ellipses asking you to refer back 2 pages for this part and 5 pages for the next part. The approach some other publishers take saves weight, trees and forearms, I get that, but it breaks the flow (like the quarter-second delay on the page-back on my Kindle did), and just as in work, so in reading technical matter: focus (or flow if you like) is important.

    Whether he covers everything he should or not I can't tell as it's not my home field. However I can see the differences from the last version I touched, which was ASP.net MVC 4. It's a lot nicer to work with.

    I was also able to get a moderately involved starter project going so it met my needs.

    Recommended

    Working in a dev bubble(1): complexity

    Manchester is a dev skills bubble just now, no doubt about that.

    In the 7 years I have been here the top level advertised salary for a senior C# dev has risen from about £35-40k to about £60-£70k. Obviously some of those are speculative CV-harvesting operations, but from discussions I have had there are clearly a lot more jobs around, and better paid ones at that.

    Especially given that inflation alone would have seen that 2011 range rise to £41k to £47k.

    While that is great news for me as a seller of technical skills, it also presents some challenges as someone who leads other people to deliver the work that I have to. How so? Well, when I was recruiting developers in my Manchester role in 2012 we could be a bit choosy about how often a candidate switched jobs. We could worry that she or he may not stay long enough to get up to speed and earn their salary. In fact in my second role we anticipated that, given the complexity of the system, if someone was fully effective in six months that was a good result.

    The problem is that in today's hot market good devs can be tempted away after not so long in a role. They can afford to be fickle. Because other employers are so (let's not say "desperate") "keen" to fill vacancies, people can leave after a relatively short time in their current role. If we want to fill empty seats we cannot be as choosy as once we (or at least my then-manager and I) were.

    The flip-side is that as a purchaser of skills I cannot afford to let someone take 12 months to earn their salary, as if they leave within two years (which seems about a 20%-33% chance at the moment) then we will not have covered our costs.

    The only way to mitigate this is to structure systems more simply.

    Sure, we've always wanted to do that haven't we?

    Yes, but to the arguments already pressing for this (ease of maintenance, the ability to rotate staff through, easier debugging) we now need to add the economic reality that if we can't get staff up to speed quickly then we risk losing money on employing them.

    So more than ever the classic messages of decoupled architectures and TDD have a key role to play in onboarding team members and getting them productive.

    In my first .net role in 2006 it was six weeks before I worked on code that made it into the Production release. This week a new starter (with 2.5 years in the industry) had written code (and unit tests) which was in Prod by the end of his second day. The level of test coverage gives confidence that the change is safe and the new starter can start to earn their salary straight away.

    While succession planning helps to mitigate the impact of departures, and company culture should help to prevent them, structuring the code so that devs are productive from the get-go is crucial.

    Git for beginners

    I've added some info to our internal Confluence wiki for new devs who don't have a git background. That group included me a year ago so I recall how much of a mind shift it is from a centralised VCS.

    Here are my notes/links for future ref (as much by me as anyone else).

    SSH:

    Once you have set up your account on Bitbucket, follow the steps on Bitbucket (dropping down "Set up SSH for Windows") to set up SSH. I would stop at step 6, as at that point you can clone the repos.

    Configure git

    Mandatory. In git bash:

    • git config --global user.email [your email]
    • git config --global user.name [your name]

    Optional:

    • Set your favourite mergetool for merging: git config --global merge.tool "winmerge". You can see a list of which suitable tools you currently have installed by running: git mergetool --tool-help

    • Create an alias for a customised version of git log (usage : git lg): git config --global alias.lg "log --pretty=' %Cred%h%Creset | %C(yellow)%d%Creset %s %Cgreen(%cr)%Creset %Ccyan%Creset' --graph"

    • Improve the git diff algorithm: git config --global diff.algorithm histogram

    • Make merge issues show the base file before conflicts as well as the actual conflicts: git config --global merge.conflictStyle diff3

    • Configure Git Bash (if you intend on using it) to point to your local repository file path by default each time it loads up:

    • Right click on the Git Bash.exe shortcut and click Properties.

    • In the Shortcut Tab, enter your local Git repository path in the "Start In" textbox. For example, "C:\Git" without the quotation marks

    • If present, remove the "--cd-to-home" text from the Target path as this effectively overrides the 'Start In' path
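    For reference, the optional git config commands above end up in your ~/.gitconfig looking roughly like this (a sketch: the long --pretty format for lg is abbreviated, and winmerge is assumed installed):

```
[merge]
    tool = winmerge
    conflictStyle = diff3
[diff]
    algorithm = histogram
[alias]
    # the full --pretty format from the lg command above goes here
    lg = log --pretty='...' --graph
```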

    Learning git

    There are two tools to help you get to grips with branching (with live testbeds):

    Angular 2 demo at the Angular Meetup on Tuesday

    A busy week. Starting a new job on Tuesday morning and then in the evening I'm demoing where I have got up to with my Angular 2 (now 4) side project at the Angular Workshop at MadLab.

    To fuel my learning I've mostly been using the Angular 2 book from Fullstack.io. This is a subscription-model book which is a little eccentric, but it is updated regularly. Given that ng moves so quickly you can waste a lot of time watching videos that are even a few months old.

    It's exciting that tech moves so quickly now, as productivity is booming, but it's often hard to make the time to keep up on what is not your crust-earning tech stack.

    New year new tech

    Last year I was learning Node and have built up a bit of a backend for a side project. Now it's onto the front end, in Angular 2.

    YADB : yawn?

    Yet another development blog? Yawn.

    Well, we're now recruiting again and I like to see evidence of work/thoughts/commitment from candidates. I think I should be able to bear up to the same examination so here goes.

    I accept that not everyone can find the time, when balancing work, commuting and domestic commitments, to work on a side project sufficient to impress a prospective employer, but I think there is a lower bar to entry for a blog.

    And as it is we've had a majorly hectic year with a conversion of a Silverlight app to a shiny new HTML5/Knockout SPA so I recognise that the time and mental clarity to come home and write ace code after dinner may not always be there if you are putting in the hours.

    Anyway let's see how this goes.