<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The New and Shiny]]></title><description><![CDATA[Software Development and DevOps Stories]]></description><link>https://thenewandshiny.com/</link><image><url>https://thenewandshiny.com/favicon.png</url><title>The New and Shiny</title><link>https://thenewandshiny.com/</link></image><generator>Ghost 5.74</generator><lastBuildDate>Wed, 28 Jan 2026 09:24:23 GMT</lastBuildDate><atom:link href="https://thenewandshiny.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Small Tips for Docker with Powershell]]></title><description><![CDATA[Small helper functions to work with docker more efficiently.]]></description><link>https://thenewandshiny.com/small-tips-for-docker/</link><guid isPermaLink="false">65604a5f1723fe00014aa17d</guid><category><![CDATA[powershell]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[devops]]></category><category><![CDATA[tooling]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Fri, 24 Jan 2020 13:04:33 GMT</pubDate><content:encoded><![CDATA[<p>This entry expands on one particular aspect of my PowerShell setup across multiple platforms.<br>
I wrote about this <a href="https://thenewandshiny.com/multiplatform-setup-for-powershell/">a while back</a>.<br>
<img src="https://images.unsplash.com/photo-1494412651409-8963ce7935a7?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Shot with @expeditionxdrone" loading="lazy"><br>
<small>Photo by <a href="https://unsplash.com/@chuttersnap?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">chuttersnap</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<h2 id="tldr">TLDR</h2>
<p>When you find yourself typing the same command over and over, create an alias that points to a helper function. <a href="https://github.com/sqeezy/PsDevScripts/blob/master/DockerUtils.psm1?ref=thenewandshiny.com">E.g.</a></p>
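<p>As a minimal sketch of the idea (the function body and alias name are just placeholders), you can bind a short alias to a helper function in your profile:</p>
<pre><code class="language-powershell"># hypothetical helper and alias, placed in your PowerShell profile
function Remove-AllDockerContainers {
  docker ps -aq | ForEach-Object { docker rm -vf $_ }
}
Set-Alias -Name rmdc -Value Remove-AllDockerContainers
</code></pre>
<p>Typing <code>rmdc</code> then runs the full cleanup.</p>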
<h2 id="utility-functions">Utility Functions</h2>
<p>While using Docker over the last few years I learned a lot. Now that I&apos;m passing that knowledge on to my colleagues, I noticed a few things that might be helpful to anyone working with Docker and/or PowerShell.</p>
<p>Once you get into Docker a bit deeper than just starting a single container and destroying it when you&apos;re done, you notice that some functions you expect to have are missing from the docker(-machine) CLI.<br>
Below are the examples I have so far deemed worthy of being turned into a function that is easily callable via an alias:</p>
<h3 id="remove-every-running-container">Remove Every Running Container</h3>
<p>When testing Docker you will leave containers running. This is especially true when using Windows as the host platform, because auto-removal via <a href="https://docs.docker.com/engine/reference/run/?ref=thenewandshiny.com#clean-up---rm">--rm</a> does not work properly in all shells.</p>
<p>For that use case I wrote the following little function:</p>
<pre><code class="language-powershell">function Remove-AllDockerContainers {
  docker ps -aq `
      | ForEach-Object { docker rm -vf $_ }
}
</code></pre>
<p>Executing this pipes the ID of every existing container (running or not) into the remove command, with the option to also remove associated volumes.</p>
<p>This later evolved into the following:</p>
<pre><code class="language-powershell">function Remove-AllDockerContainers {
  docker ps -aq `
      | ForEach-Object { docker stop -t 1 $_ } `
      | ForEach-Object { docker rm -vf $_ }
}
</code></pre>
<p>The stop with a short timeout was added for some special cases where the container process wasn&apos;t willing to die easily.</p>
<h3 id="remove-untaged-images">Remove Untagged Images</h3>
<pre><code class="language-powershell">function Remove-AllUntagedDockerImages {
  docker images `
      | ConvertFrom-String `
      | Where-Object {$_.P2 -eq &quot;&lt;none&gt;&quot;} `
      | ForEach-Object { docker rmi $_.P3 }
}
</code></pre>
<p>This is pretty self-explanatory: find all images whose tag is <code>&lt;none&gt;</code> and remove them.</p>
<h3 id="on-board-functions">On board functions</h3>
<p>To be fair, the docker CLI has tools to do similar things.<br>
<code>docker system prune (-f)</code> might be the most notable. It removes basically everything that&apos;s not directly needed, i.e. everything that doesn&apos;t stop any running container from working. For my purposes this is still too coarse, which is why I keep the functions above in my PowerShell (and pwsh) profile.</p>
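<p>For reference, these are the built-in clean-up commands closest to the functions above (they can remove more than you expect, so check the docs before running them with <code>-f</code>):</p>
<pre><code class="language-powershell">docker container prune -f   # remove all stopped containers
docker image prune -f       # remove dangling (&lt;none&gt;) images
docker system prune -f      # stopped containers, unused networks, dangling images, build cache
</code></pre>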
<h3 id="get-the-content-of-any-registry">Get the content of any registry</h3>
<pre><code class="language-powershell">function Get-DockerRegistryContent {
  [CmdletBinding()]
  Param(
  $Filter=&quot;.*&quot;,
  $RegistryEndpoint
)
  $allRepos = (Invoke-RestMethod -Uri &quot;http://$RegistryEndpoint/v2/_catalog&quot; -Method Get ).repositories 
  $reposMatchingFilter = $allRepos -Match $Filter
  $reposMatchingFilter | ForEach-Object {(Invoke-RestMethod -Uri &quot;http://$RegistryEndpoint/v2/$_/tags/list&quot; -Method Get )}
}
</code></pre>
<p>Speaking of on-board functions: this one, for some reason, is nowhere to be found in the docker CLI. It uses the REST API of any given Docker registry to give you an overview of its contents. I&apos;m sure I&apos;m missing the bigger picture here, but this has baffled me for the last couple of years.</p>
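<p>A hypothetical invocation against a local registry listening on port 5000 would look like this (endpoint and filter are placeholders for your own registry):</p>
<pre><code class="language-powershell"># list the tags of all repositories whose name starts with "my-app"
Get-DockerRegistryContent -RegistryEndpoint "localhost:5000" -Filter "^my-app"
</code></pre>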
<p><strong>As always I&apos;m happy about feedback and will answer questions if there are any. The ways of contact are on this site.</strong></p>
<p></p>]]></content:encoded></item><item><title><![CDATA[Particle Swarm Optimizer - The Hive Mind]]></title><description><![CDATA[Swarm related logic and the first actually working optimizer.]]></description><link>https://thenewandshiny.com/particle-swarm-optimizer-the-hive-mind/</link><guid isPermaLink="false">65604a5f1723fe00014aa17b</guid><category><![CDATA[F#]]></category><category><![CDATA[Algorithm]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Thu, 13 Dec 2018 05:32:23 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1462040015891-7c792246b10e?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1462040015891-7c792246b10e?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Particle Swarm Optimizer - The Hive Mind"><p>This article is part of a series:</p>
<ul>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-an-introduction/">An Introduction</a></li>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-the-social-structure/">The Social Structure</a></li>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-the-hive-mind">The Hive Mind</a></li>
<li>...</li>
</ul>
<p>This article concludes the initial discussion about the particle swarm optimizer. In this third part I will talk about the swarm structure and the easiest implementation of the optimizer itself.</p>
<h2 id="theswarmnothingspecialhere">The Swarm - Nothing Special Here</h2>
<p>To recapitulate here is the definition of the swarm:</p>
<pre><code>type Swarm = {
  GlobalBest : Solution;
  Particles : Particle list
}
</code></pre>
<p>There is really not much exciting stuff here, and therefore the logic used in relation to it isn&apos;t too surprising either. It boils down to creation and update functions, plus the logic to determine the size of the swarm, which I took from a well known <a href="http://hal.archives-ouvertes.fr/file/index/docid/764996/filename/SPSO_descriptions.pdf?ref=thenewandshiny.com">paper</a>. In the end this looks something like the following:</p>
<pre><code>module Swarm =

  let private typicalSwarmSize dimension = 
              10 + 2 * (dimension |&gt; float |&gt; Math.Sqrt |&gt; int)
  let private valueFromSolution solution =
    let (_ , value) = solution
    value
    
  let private bestParticle particles =  
    particles 
      |&gt; Seq.minBy (fun p -&gt; p.LocalBest |&gt; valueFromSolution)
                                          
  let create problem =
    let particles = [1 .. (typicalSwarmSize problem.Dimension)] 
                      |&gt; List.map (fun _ -&gt; Particle.create problem)
    let initialBestParticle = bestParticle particles
    {
      GlobalBest = initialBestParticle.LocalBest;
      Particles = particles
    }

  let update (swarm:Swarm) (proposal : Solution) : Swarm =
    let ( _, oldBest) = swarm.GlobalBest
    let ( _, proposedBest ) = proposal

    if proposedBest &lt; oldBest
    then { swarm with GlobalBest = proposal }
    else swarm
</code></pre>
<p>Now for the actually interesting logic. Drum roll...</p>
<h2 id="sequentialoptimizerstepbystep">Sequential Optimizer - Step by Step</h2>
<p>Let&apos;s recapitulate what an optimizer implementation has to accomplish. The modules regarding particles and swarms care only about exactly those things, so it&apos;s the optimizer&apos;s task to put everything together. This can be summarized in the type definition <code>type Optimizer = OptimizationProblem -&gt; Solution</code>: we take a problem and get a solution. Everything else has to happen inside that logic. For now this just seems the most convenient. Below I list a condensed version of the sequential optimizer to illustrate the core functionality; the actual current implementation adds some logging.</p>
<pre><code>let solve problem : Solution =

  let iterParticleOnProblem = problem |&gt; Particle.itterate
  
  let updateSingleParticle particle {GlobalBest = currentBest} = 
    iterParticleOnProblem currentBest particle  
  
  let updateSingleAndApplyToSwarm swarm particle =
    let updatedSingleParticle = particle |&gt; updateSingleParticle &lt;| swarm
    let updatedSwarm = Swarm.update swarm updatedSingleParticle.LocalBest
    { updatedSwarm with Particles = (updatedSingleParticle::swarm.Particles) }
  
  let singleIterationOverWholeSwarm ({GlobalBest = globalBest; Particles = particles}) : Swarm = 
    Seq.fold updateSingleAndApplyToSwarm {GlobalBest = globalBest;Particles = List.empty} particles
    
  let itterationWithIndex swarm _ = 
    singleIterationOverWholeSwarm swarm
    
  let swarm = Swarm.create problem
  let swarmAfterMaxIterations = 
    Seq.fold itterationWithIndex swarm [1 .. 1000]

  swarmAfterMaxIterations.GlobalBest
</code></pre>
<p>Here we basically have a number of functions, derived from functions in the particle and swarm modules, tailored to the given problem. This is done by partial application, in my opinion one of the biggest strengths of functional languages. For example, take <code>let iterParticleOnProblem = problem |&gt; Particle.itterate</code>. This takes a function of type <code>OptimizationProblem -&gt; Solution -&gt; Particle -&gt; Particle</code> and resolves the <em>OptimizationProblem</em> parameter, resulting in <code>Solution -&gt; Particle -&gt; Particle</code>.<br>
The same is afterwards done for...</p>
<ul>
<li>applying the solution to a single particle</li>
<li>updating a particle and...</li>
<li>immediately updating the swarm using that particle</li>
<li>updating the complete swarm</li>
</ul>
<p>Now we can get concrete, in the sense that we create an actual swarm state and iterate over it a thousand times. After that we simply return the best solution found.<br>
This is a really basic implementation, and we will expand on it at some later point.</p>
<h2 id="notassocialassuspectedlookingahead">Not as social as suspected - looking ahead</h2>
<p>When I read the name Particle Swarm Optimizer, pictures of a hive mind formed in my head, but the particles in this algorithm merely share one common state, at least in this simple implementation. My personal goal for this series is to implement, explain and compare more sophisticated communication structures for the swarm. Particle Swarm Optimization is far from a solved problem, so there should be interesting things ahead.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Particle Swarm Optimizer - The Social Structure]]></title><description><![CDATA[Insight into the relationship between Particles inside of a Swarm.]]></description><link>https://thenewandshiny.com/particle-swarm-optimizer-the-social-structure/</link><guid isPermaLink="false">65604a5f1723fe00014aa17a</guid><category><![CDATA[F#]]></category><category><![CDATA[Algorithm]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Tue, 06 Nov 2018 19:31:44 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1520792611267-82d5692af267?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=43b4af8ca6fdd27c4be237f60feb57df" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://images.unsplash.com/photo-1520792611267-82d5692af267?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=43b4af8ca6fdd27c4be237f60feb57df" alt="Particle Swarm Optimizer - The Social Structure"><p><img src="https://thenewandshiny.com/content/images/2018/11/bird-swarm.jpg" alt="Particle Swarm Optimizer - The Social Structure" loading="lazy"><br>
Today I want to write about the gravity-like relationship between particles inside a particle swarm.</p>
<p>This article is part of a series:</p>
<ul>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-an-introduction/">An Introduction</a></li>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-the-social-structure/">The Social Structure</a></li>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-the-hive-mind">The Hive Mind</a></li>
<li>...</li>
</ul>
<h2 id="themodel">The Model</h2>
<p>Going back to the last article, we defined a model that looked something like the following:</p>
<pre><code>type Particle = {
  Position  : ParameterSet
  Velocity  : float array
  LocalBest : Solution
}

type Swarm = {
  GlobalBest : Solution;
  Particles  : Particle list
}

// The remaining parts of the optimization problem will be discussed later
type OptimizationProblem = {
  ...
  Func : TargetFunction
  ...
  
}
</code></pre>
<p>On the one hand there is the single particle with its knowledge about its own best known state. On the other hand there is the best state known to the whole swarm at this point in the execution. In a simple implementation the particles don&apos;t really talk to each other but just share that one global best state.</p>
<h2 id="themovement">The Movement</h2>
<p>The simplest logic for moving a particle in the context of a swarm is:</p>
<ol>
<li>Update the velocity of the particle using global and local best known solution as well as a random component</li>
<li>Update the position of the particle using the updated velocity</li>
<li>Update the local best known solution when appropriate</li>
<li>Update the global best known solution when appropriate</li>
</ol>
<p>Looking at the model above, this results in the following function:</p>
<pre><code>// below is the function signature aligned with parameter list 

//            OptimizationProblem -&gt; Solution -&gt;          Particle  -&gt; Particle
let itterate  problem                (globalBestPos, _)   particle =

  let weightLocal = randomBetweenZeroAndOne()
  let weightGlobal = randomBetweenZeroAndOne()
  
  let (localBestPos, localBestValue) = particle.LocalBest
  let currentPosition = particle.Position

  let updatedVelocity = particle.Velocity 
                        ++ (weightGlobal .* (globalBestPos -- currentPosition ))
                        ++ (weightLocal  .* (localBestPos  -- currentPosition ))

  let updatedPosition = currentPosition  
                        ++ updatedVelocity

  let newCurrentValue = problem.Func updatedPosition
  let updatedLocalBest =
    if  newCurrentValue &lt; localBestValue then
      (updatedPosition, newCurrentValue)
    else
      particle.LocalBest
      
  {
    Position  = updatedPosition;
    LocalBest = updatedLocalBest;
    Velocity  = updatedVelocity
  }
</code></pre>
<p>OK, it&apos;s not quite so simple. First things first, there are three infix operators used in this code, namely <code>++</code>, <code>--</code> and <code>.*</code>. They provide simple element-wise array operations on the positions and velocities of particles. The code follows:</p>
<pre><code>let (--) v1 v2 = Array.map2 (-) v1 v2
let (++) v1 v2 = Array.map2 (+) v1 v2
let (.*) scalar v = v |&gt; Array.map (fun vCoordinate -&gt; scalar * vCoordinate)
</code></pre>
<p>These operators make it possible to use a shorter syntax when updating velocity and position.<br>
In this simple implementation the local and the global attraction are weighted equally. Depending on the problem at hand it might be smart to use a different weight distribution; this will be discussed later in this article series.<br>
Using these random weights we update the velocity of the particle, and with that the position. Then we evaluate the <em>Fitnesse</em> of the new position and update the particle&apos;s best known state. Finally we aggregate all of this to form a new particle.</p>
<p>We have now covered three of the four steps listed above. The fourth step, <em>updating the globally best known solution</em>, is not part of the logic concerning a single particle of the swarm and will thus be discussed in the next article.</p>
<h2 id="upnextthehivemind">Up next - The Hive Mind</h2>
<p>In the next article we will discuss how this global best known state is managed. At the end of that article we will have the first working implementation of the algorithm.</p>
<p>As always feel free to ask me about specifics and please point me to logic gaps or badly explained parts of my writing. Till next time.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Particle Swarm Optimizer - An Introduction]]></title><description><![CDATA[Short Introduction in the workings of the PSO and modelling it in F#.]]></description><link>https://thenewandshiny.com/particle-swarm-optimizer-an-introduction/</link><guid isPermaLink="false">65604a5f1723fe00014aa179</guid><category><![CDATA[F#]]></category><category><![CDATA[Algorithm]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Fri, 02 Nov 2018 08:48:07 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://thenewandshiny.com/content/images/2018/11/dl.maxpixel.freegreatpicture.com.jpeg" alt="dl.maxpixel.freegreatpicture.com" loading="lazy"></p>
<p>This article is part of a series:</p>
<ul>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-an-introduction/">An Introduction</a></li>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-the-social-structure/">The Social Structure</a></li>
<li><a href="https://thenewandshiny.com/particle-swarm-optimizer-the-hive-mind">The Hive Mind</a></li>
<li>...</li>
</ul>
<h2 id="introduction">Introduction</h2>
<p>Since I started programming, and especially after I started working as a developer, I have been eager to work on actual problems while trying out new technologies. One recurring algorithm I tend to implement is the <a href="https://en.wikipedia.org/wiki/Particle_swarm_optimization?ref=thenewandshiny.com">Particle Swarm Optimizer</a> (from now on just called PSO).</p>
<h2 id="many-moving-partsthe-algorithm">Many moving Parts - the Algorithm</h2>
<p>The main concept behind this optimization algorithm is a number of particles that are let loose on a numerical problem. Friendly as those particles are, they attract each other and share common knowledge about the best solutions to the problem they are trying to solve. With the information from their own search, and from the rumors heard from the other particles in the swarm, these particles move through the problem space.<br>
This concept appealed to me because it is implemented quickly, tested easily, and has potential to be parallelized or tinkered with in other ways.</p>
<p>I will go over the details of the algorithm again, but for now everybody interested in the inner workings should go to the <a href="https://en.wikipedia.org/wiki/Particle_swarm_optimization?ref=thenewandshiny.com">Wikipedia page</a> and look it up.</p>
<p>One very interesting aspect of this algorithm is that it does not rely on a well-behaved function, as it doesn&apos;t use e.g. the gradient of the function being optimized.<br>
The other advantage, for me at least, is that the results, and the way they are reached, are easily visualized.</p>
<h2 id="fthe-current-language-of-choice">F# - The current language of choice</h2>
<p>My main focus in the last years, language-wise, was C#. For some time now, though, I have tried to get a better understanding of functional programming concepts, in contrast to the OOP mindset one acquires using C#. After looking into things like Elixir and JavaScript I settled on F# as the most convenient choice.</p>
<h2 id="first-things-firstthe-model">First things first - The Model</h2>
<p>When you try to learn about F#, you&apos;re going to be told that it is very well suited to a DDD (Domain-Driven Design) way of building your program. So let&apos;s do that:</p>
<pre><code>type Optimizer = OptimizationProblem -&gt; Solution
</code></pre>
<p>This seems obvious, and it&apos;s not specific to this algorithm; it&apos;s just the general definition of an optimizing machine. Let&apos;s work our way down from this beginning.</p>
<pre><code>type OptimizationProblem = {
  Function : TargetFunction
  SearchSpace : float * float
  Dimension : int
}
</code></pre>
<p>Still nothing that actually touches on the special nature of the PSO. An optimization problem is made up of an actual function, which has a defined dimension and whose input parameters typically lie in an interval of values (the search space).</p>
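<p>To make this concrete, here is a hypothetical instance, the two-dimensional sphere function on the interval [-10, 10] (assuming <code>TargetFunction</code> is <code>float array -&gt; float</code>):</p>
<pre><code>let sphere : OptimizationProblem = {
  // minimum at the origin, where the function value is 0.0
  Function = fun parameters -&gt; parameters |&gt; Array.sumBy (fun x -&gt; x * x)
  SearchSpace = (-10.0, 10.0)
  Dimension = 2
}
</code></pre>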
<pre><code>type Solution = ParameterSet * Fitnesse

type ParameterSet = float array

type Fitnesse = float

</code></pre>
<p>These are just the underlying parts of the model we are working with here. A solution is a certain parameter set together with its resulting function value; a parameter set is an array of numbers; and the fitnesse of a solution is a single number. Nothing much to see here. Now we come to the interesting part.</p>
<pre><code>type Particle = {
  Position : ParameterSet
  Velocity : float array
  LocalBest : Solution
}
</code></pre>
<p>Every particle has a position it currently occupies in the search space. This position is bound to change over the course of an optimization.<br>
To move, a particle has to have a velocity associated with it. This velocity changes because the particles talk to each other and remember their own best known state.</p>
<pre><code>type Swarm = {
  GlobalBest : Solution;
  Particles : Particle list
}
</code></pre>
<p>The swarm itself just consists of the current population of particles and their shared, and therefore globally, best solution to the optimization problem at hand.</p>
<h2 id="wrap-up">Wrap Up</h2>
<p>We have now looked into a simple model which we can use to implement a working Particle Swarm Optimizer. In the next article we will look into the behavior of the particles and the gravity-like attraction they have to each other.</p>
]]></content:encoded></item><item><title><![CDATA[Setting up GitVersioning for your Project]]></title><description><![CDATA[10 minute guide on how to setup versioning for your dotnet project]]></description><link>https://thenewandshiny.com/setting-up-gitversioning-for-your-project/</link><guid isPermaLink="false">65604a5f1723fe00014aa177</guid><category><![CDATA[dotnet]]></category><category><![CDATA[git]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Sat, 27 Oct 2018 12:59:10 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Versioning your projects is hard, but setting up basic versioning functionality for your projects is not. This is what I learned from a very informative <a href="https://blogs.msdn.microsoft.com/dotnet/2018/10/15/guidance-for-library-authors/?ref=thenewandshiny.com">Video</a><br>
from Immo Landwerth that I saw a few days ago. So I sat down, took an old project of mine, and tried it out. Turns out it&apos;s true.</p>
<h2 id="gitversioningentersthearena">GitVersioning enters the Arena</h2>
<p>The project I used for that is an implementation of a <a href="https://en.wikipedia.org/wiki/Fibonacci_heap?ref=thenewandshiny.com">Fibonacci Heap</a> I wrote when I was in school. It&apos;s a basic example of a minimalistic library with no dependencies. You can find the code on <a href="https://github.com/sqeezy/FibonacciHeap?ref=thenewandshiny.com">GitHub</a>.</p>
<p>Following the documentation on GitHub, I just installed the command line tool via the dotnet CLI: <code>dotnet tool install -g nbgv</code>. After that you just have to run the install command <code>nbgv install</code> in your project or solution root and it will add the files <em>Directory.Build.props</em> and <em>version.json</em>.<br>
The former gets picked up by MSBuild and injects the versioning logic into every project.<br>
<em>version.json</em> is where you define the major and minor parts of your version number, plus additional tags like <em>beta</em> or <em>preview</em> you want to use.</p>
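<p>A minimal <em>version.json</em> might look like the following (the version value here is just a placeholder; additional fields are available depending on your setup):</p>
<pre><code class="language-json">{
  "version": "1.0-beta"
}
</code></pre>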
<p>At this point the versioning part of things is done. Afterwards I also configured the creation of the nuget package for my library. This will be the topic of a future article.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Multiplatform Setup for Powershell]]></title><description><![CDATA[Easy start into using powershell on any system with some nice modifications]]></description><link>https://thenewandshiny.com/multiplatform-setup-for-powershell/</link><guid isPermaLink="false">65604a5f1723fe00014aa176</guid><category><![CDATA[powershell]]></category><category><![CDATA[tooling]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Wed, 24 Oct 2018 05:24:45 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://thenewandshiny.com/content/images/2018/10/28485281273_d2ef89e5d2_b.jpg" alt="28485281273_d2ef89e5d2_b" loading="lazy"></p>
<p>This article is part of a series about powershell in general and the multiplatform nature of Powershell-Core specifically.</p>
<ol>
<li><a href="../powershell-everywhere/">Powershell Everywhere</a></li>
<li><a href=".">Multiplatform Setup for Powershell</a></li>
</ol>
<h2 id="tldr">TLDR</h2>
<p>Go to <a href="https://github.com/sqeezy/PsDevScripts?ref=thenewandshiny.com">PsDevScripts on GitHub</a> for an easy entry into using PowerShell on whichever OS you prefer.</p>
<h2 id="my-intentions">My Intentions</h2>
<p>In the last article in this series I wrote mainly about the possibility of using PowerShell on multiple platforms. In no way do I want to talk people into abandoning their tried-and-true setup and diving head-first into PowerShell; I just wanted to talk about the advantages I see.</p>
<h2 id="how-to-make-the-entry-easy">How to make the Entry easy</h2>
<p>Many comments I got on the last article read something like <em>I wanted to try pwsh for a long time but didn&apos;t find a good book (or other resource) to start with</em>. As for resources, I named some at the end of the last article.<br>
In my opinion, though, the easiest way of getting into a shell is just using it. That&apos;s why I cleaned up the personal suite of setup scripts and helper modules that I use on every system: <a href="https://github.com/sqeezy/PsDevScripts?ref=thenewandshiny.com">PsDevScripts on Github</a><br>
I will update this repository in the future as I go along with learning about PowerShell.</p>
<h2 id="parts-of-my-setup">Parts of my Setup</h2>
<p>What follows is a slightly more detailed explanation of every part of this suite.</p>
<h3 id="installer-scripts">Installer Scripts</h3>
<p>While cleaning up the <a href="https://github.com/sqeezy/PsDevScripts?ref=thenewandshiny.com">repo</a> I noticed that I would like to just start a single script and be done with it. Everybody who wants to try out this basic setup can simply run <code>InstallDefaultSetup.ps1</code>.</p>
<h3 id="modules">Modules</h3>
<p>For now I included the modules that I always set up on a new machine:</p>
<table>
<thead>
<tr>
<th style="text-align:left">Module</th>
<th style="text-align:left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">posh-git</td>
<td style="text-align:left">Autocomplete for git commands</td>
</tr>
<tr>
<td style="text-align:left">posh-docker</td>
<td style="text-align:left">Autocomplete for docker(-compose) commands</td>
</tr>
<tr>
<td style="text-align:left">PowershellConfig.psm1</td>
<td style="text-align:left">Functions to interact with your pwsh-config</td>
</tr>
</tbody>
</table>
<h3 id="default-profile">Default Profile</h3>
<pre><code class="language-Powershell"># import all modules in this folder
Get-ChildItem -Filter &quot;*.psm1&quot; -File &quot;~\Github\PsDevScripts&quot; | ForEach-Object {Import-Module $_.FullName}

# activate modules you want always to be active
Import-Module -Force posh-git, posh-docker

# configure posh-git
$global:GitPromptSettings.DefaultPromptAbbreviateHomeDirectory = $true
$global:GitPromptSettings.BeforeText = &apos;[&apos;
$global:GitPromptSettings.AfterText  = &apos;] &apos;

# additional functions that are machine specific
</code></pre>
<p>All this does is import the installed modules into your session. I use the term <em>installed</em> loosely here, as I also import modules included in my repository.<br>
At the end there is a very basic configuration for <em>posh-git</em>, just so your command prompt is a little cleaner when navigating git repos.</p>
<h2 id="further-questions">Further Questions?</h2>
<p>With the above setup you can get started with PowerShell and the <em>git/docker</em>-based workflow I tend to follow.</p>
<p>Now I would be very much interested in questions that come to mind for someone reading about this setup. I would be happy to answer them in a future article or in a conversation on twitter, reddit or email.</p>
<p>Till next time.</p>
]]></content:encoded></item><item><title><![CDATA[Powershell Everywhere]]></title><description><![CDATA[Use the same shell on every system. This is my recommendation on which to choose]]></description><link>https://thenewandshiny.com/powershell-everywhere/</link><guid isPermaLink="false">65604a5f1723fe00014aa175</guid><category><![CDATA[infrastructure]]></category><category><![CDATA[devops]]></category><category><![CDATA[powershell]]></category><category><![CDATA[tooling]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Sun, 21 Oct 2018 19:38:10 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://thenewandshiny.com/content/images/2018/10/keep_calm_pwsh.png" alt="keep_calm_pwsh" loading="lazy"></p>
<p>This article is part of a series about powershell in general and the multiplatform nature of Powershell-Core specifically.</p>
<ol>
<li><a href=".">Powershell Everywhere</a></li>
<li><a href="../multiplatform-setup-for-powershell">Multiplatform Setup for Powershell</a></li>
</ol>
<p>In my day job I work for a company focused on Windows/.NET/WPF. While we worked on cleaning up our build processes and standardizing our tools across all our projects, I had the chance to play around with multiple scripting languages. One that has been following me around for a while now is Powershell.</p>
<h2 id="technical-reasons-to-look-into-powershell">Technical Reasons to look into Powershell</h2>
<p>Our &apos;legacy&apos; apps mostly used <em>rake</em> or even good old <em>batch files</em>. As we mainly code in C#, I proposed <a href="https://cakebuild.net/?ref=thenewandshiny.com">Cake</a> as our build tool. This naturally forced us to use Powershell, because the standard way of using Cake under Windows is a Powershell bootstrapper. In a later part of the refactoring of our toolchain, I got the chance to write a bigger test suite in Powershell. This worked well, as Powershell is the standard Windows shell today and therefore well supported.<br>
On another occasion we had to provision some VMs as build agents. Here another tool came in handy:<br>
<a href="https://chocolatey.org">Chocolatey</a> - this package manager written in Powershell finally fixed one of my biggest pain points of working on Windows: the long setup time for a fresh machine. It gets even easier with <a href="https://boxstarter.org/?ref=thenewandshiny.com">Boxstarter</a>, which is built on top of Powershell and<br>
Chocolatey. At this point I was locked into Powershell while working on Windows.</p>
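<p>To give an idea how short such a provisioning step can be, here is a minimal sketch - the package names are examples only, and it assumes Chocolatey itself is already installed:</p>

```powershell
# Hypothetical package selection - pick whatever your fresh machine needs.
# -y answers all confirmation prompts automatically.
choco install -y git vscode 7zip

# Boxstarter wraps Chocolatey and adds things like reboot resilience, e.g.:
# Install-BoxstarterPackage -PackageName MyDevBox   # MyDevBox is a placeholder
```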
<h2 id="stylistic-reasons-to-use">Stylistic Reasons to use Powershell</h2>
<p>One thing I instantly liked about Powershell is its standard naming convention: <code>&lt;Verb&gt;-(&lt;SomeAddition&gt;)&lt;Noun&gt;</code>. Another: the normal return value of a Powershell function is an object. While I was confused for a while about how to get certain property values out of objects, and about when a path would be implicitly converted to a string or not, that was mostly due to me not reading the docs.<br>
At this point the <a href="https://docs.microsoft.com/?ref=thenewandshiny.com">New Microsoft Documentation</a> has to be highlighted. The folks in Redmond took most complaints people had about MSDN and created actually useful documentation with a strong focus on fast tutorials and working examples.</p>
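<p>To make both points concrete, here is a tiny sketch - the paths are only examples:</p>

```powershell
# Every Cmdlet follows the <Verb>-<Noun> pattern, and the result is an object,
# not plain text, so you can read properties directly.
$file = Get-Item 'C:\Windows\notepad.exe'   # example path
$file.Length                                # size in bytes, as a number

# Objects also flow through the pipeline:
Get-ChildItem 'C:\Windows' |
    Sort-Object Length -Descending |
    Select-Object -First 3 Name, Length
```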
<h2 id="shells-on-other-plattforms">Shells on other Platforms</h2>
<p>Though I mostly work on Windows, I use docker a lot and all my server apps these days run under Linux. Back when I went to university I played around a lot with Linux and always liked aspects of it. What I never liked was the missing consistency between the tools you have to use every day. While the Linux shells work well communicating over plain ASCII text, the input/output formats are not consistent between tools. This means the overhead of connecting tools can be huge.<br>
I&apos;m not a Linux expert and certainly not a <em>bash</em> or <em>sh</em> one. Personally I used <em>zsh</em> when working under Linux for years. This is changing because of Microsoft&apos;s effort to implement a cross-platform toolchain.</p>
<h2 id="powershell-core-enters-the-stage">Powershell Core enters the Stage</h2>
<p>The first alpha release of <a href="https://github.com/PowerShell/PowerShell?ref=thenewandshiny.com">Powershell Core</a> went up in mid-2016. I heard of the project shortly after that and nearly fell off my chair. .NET on Linux is one thing; the &apos;Windows shell&apos; under Linux sounded crazy at first. Since then two years have passed, <em>dotnet core</em> is an actual thing, and Powershell is not the &apos;Windows shell&apos; anymore. You can use <em>pwsh</em>, as the tool is actually called, under every mainstream Linux distro.<br>
One other huge upside is performance. As <em>Powershell Core</em> is built on <em>dotnet core</em>, there is a big performance uptick. In every other way Powershell on Linux is just like every other shell. In contrast to Windows Powershell, though, the huge number of alias definitions for Cmdlets has been removed, as they would collide with native Linux commands.</p>
<h2 id="one-shell-config-for-every-system">One shell config for every system</h2>
<p>The number one reason for me to switch to Powershell was that I can reuse my shell configuration from Windows. So far I haven&apos;t found any module of my configuration that hasn&apos;t been ported to <em>Powershell Core</em> yet.<br>
The feeling of logging into a Linux machine and not noticing any change in my command prompt was kind of unreal. And I really look forward to being able to use the same shell config on every system.</p>
<h2 id="bottom-line">Bottom Line</h2>
<p>I would recommend giving Powershell a chance, even if Windows isn&apos;t your main OS. Its standardized Cmdlet syntax is very convenient for teams, and the biggest downside, it being Windows-only, is finally history. The object-oriented nature of the function returns is also very helpful for reasoning about the code you write.</p>
<h3 id="resources">Resources</h3>
<p>Finally here is a short list of resources I use often when using Powershell:</p>
<ul>
<li><a href="https://docs.microsoft.com/powershell/?ref=thenewandshiny.com">Docs</a></li>
<li><a href="https://www.manning.com/books/windows-powershell-in-action-third-edition?ref=thenewandshiny.com">Powershell in Action - Book</a></li>
<li><a href="https://github.com/janikvonrotz/awesome-powershell?ref=thenewandshiny.com">Awesome Powershell</a></li>
</ul>
<p>Thanks for your time and feel free to contact me with any questions or feedback.</p>
]]></content:encoded></item><item><title><![CDATA[Using Portainer for System Monitoring]]></title><description><![CDATA[Exploring an easy to use Web Frontend for your Docker Host]]></description><link>https://thenewandshiny.com/using-portainer-for-system-monitoring/</link><guid isPermaLink="false">65604a5f1723fe00014aa174</guid><category><![CDATA[docker]]></category><category><![CDATA[infrastructure]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Sat, 20 Oct 2018 06:19:31 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="exploringaneasytousewebfrontendforyourdockerhost">Exploring an easy to use Web Frontend for your Docker Host</h2>
<p>This started while talking to a colleague about how he wanted to host his own server instance of a multiplayer game for a small LAN. We talked about hardware requirements, but I couldn&apos;t help thinking that hosting a game server on actual hardware at home is not ideal. That&apos;s why I tried to convince him to use some kind of VPS and docker for this.<br>
My colleague told me that he wanted to control his server and the contained apps via a web panel, so I went on the quest of finding one.</p>
<p><img src="https://thenewandshiny.com/content/images/2018/10/dockerize_all_the_things-1.jpg" alt="dockerize_all_the_things-1" loading="lazy"></p>
<h2 id="portainerentersthestage">Portainer enters the stage</h2>
<p>Inside of 5 minutes it was clear that <a href="https://portainer.readthedocs.io/en/latest/deployment.html?ref=thenewandshiny.com#quick-start">Portainer</a> could be the web-panel side of this equation. So we went ahead and tried it out. As all tools want to make you believe that they are easy to use, we didn&apos;t take their word for it but followed the quickstart guide I linked above:</p>
<pre><code>$ docker volume create portainer_data
$ docker run -d -p 9000:9000 --name portainer --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
</code></pre>
<p>When you now visit port 9000 of your docker host, you get a very clean and easy-to-understand web panel. There is really not that much more to say.<br>
I&apos;d like to recommend that everybody using a small VPS to host their own small apps use this tool to get quick and concise access to the state of their containers and the host itself. For everybody who read <a href="../how-to-host-your-own-blog">my series about hosting your own blog</a>, this is the state of my docker host running the automatic https-enabled setup. Only this time in a wonderful table:</p>
<p><img src="https://thenewandshiny.com/content/images/2018/10/portainer_screen.png" alt="portainer_screen" loading="lazy"></p>
<p>My colleague&apos;s second point was that he wants to control the VPS itself via a frontend. Thankfully most VPS providers already have that functionality. I use <a href="https://www.digitalocean.com/?ref=thenewandshiny.com">DigitalOcean</a>, but I&apos;m sure there are comparable solutions.<br>
Using the web panel of your provider to shut down/restart your server is already pretty convenient, but shouldn&apos;t the server just shut down when nobody is playing?<br>
I will look into that question in a later article and will link it HERE when it&apos;s done.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[How to host your own (secure) blog...]]></title><description><![CDATA[Second part of  my series about hosting you own blog]]></description><link>https://thenewandshiny.com/how-to-host-your-own-secure-blog/</link><guid isPermaLink="false">65604a5f1723fe00014aa173</guid><category><![CDATA[infrastructure]]></category><category><![CDATA[devops]]></category><category><![CDATA[docker]]></category><category><![CDATA[security]]></category><category><![CDATA[powershell]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Wed, 17 Oct 2018 14:42:59 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="orthestoryofletsencryptratelimits">...or the story of Let&apos;s Encrypt rate limits</h2>
<p>This article is part of a series about setting up your own blog and hosting stuff in general:</p>
<ol>
<li><a href="http://antonherzog.com/how-to-host-your-own-blog/?ref=thenewandshiny.com">How to host your own Blog...</a></li>
<li><a href="http://antonherzog.com/how-to-host-your-own-secure-blog/?ref=thenewandshiny.com">How to host your own (secure) blog...</a></li>
</ol>
<p><img src="https://images.unsplash.com/photo-1510511459019-5dda7724fd87?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=9ec064423ff0aa7f325c68aea02ad784" alt="black and gray laptop computer turned on" loading="lazy"><br>
<small>Photo by <a href="https://unsplash.com/@markusspiske?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Markus Spiske</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></small></p>
<p>So, after the <a href="https://antonherzog.com/how-to-host-your-own-blog?ref=thenewandshiny.com">setup of my first own blog</a>, I was left with a dilemma: I was eager to start writing about my current adventures in writing code, but somewhat ashamed of hosting a blog without https support.</p>
<p>So I started looking around for a simple but secure solution. In the end it seemed obvious to use some kind of decoupled <em>Let&#x2019;s Encrypt</em> container, either combined with or integrated into a reverse proxy. This way the app, in this case ghost, stays very simple while the <em>complicated</em> security stuff happens elsewhere.<br>
The first thing I found was <a href="https://hub.docker.com/r/linuxserver/letsencrypt/?ref=thenewandshiny.com">linuxserver/letsencrypt</a>. This seemed promising: just spin up the container and point the integrated reverse proxy at a linked ghost host. To make this short, this would have required me to look into nginx configuration. I&apos;m pretty sure that this isn&apos;t that hard and I will get back to it at some point, but for now I didn&apos;t want to try out too many new things at once.</p>
<h3 id="thesolution">The Solution</h3>
<p>As is typical for my personal stereotype of developers, I turned to a very general solution to my problem next. I learned that you can build a docker compose configuration that hosts every other container spun up at some later point, provided it&apos;s configured with some environment variables. This felt somewhat like building a framework that is only used by one application. Still, I found it quite interesting.<br>
I ended up using <a href="https://github.com/ekkis/nginx-proxy-LE-docker-compose?ref=thenewandshiny.com">ekkis/nginx-proxy-LE-docker-compose</a>, as it had everything I wanted:</p>
<ul>
<li>isolated application container for easy setup</li>
<li>abstracted nginx configuration</li>
<li>automatic <em>Let&apos;s Encrypt</em> setup</li>
<li>the possibility to host multiple secured containers on one host.</li>
</ul>
<p>The result can be found on <a href="https://github.com/sqeezy/blog?ref=thenewandshiny.com">Github</a>. Let&apos;s go over the parts where I deviate from the original setup.</p>
<p><strong>docker-compose.yml</strong></p>
<pre><code class="language-YAML">nginx-proxy
...
nginx-gen:
  build: ./patched_nginx_gen
  image: patched-nginx-gen
... 
nginx-ssl
... 
</code></pre>
<p>This uses the following Dockerfile:</p>
<p><strong>./patched_nginx_gen/Dockerfile</strong></p>
<pre><code>FROM jwilder/docker-gen
COPY ./nginx.tmpl /etc/docker-gen/templates/nginx.tmpl
</code></pre>
<p>This just fixes the problems that ekkis is talking about at the end of his Readme. The nginx template gets copied to the appropriate position in the docker-gen container. The &quot;gen&quot; here stands for generation (of config files).</p>
<p>Now with all this set up, we can start any other docker container, and with the right environment variables set, the cluster of containers gets a certificate for your application and serves it under the specified domain. This of course means there is one more prerequisite to this whole setup: you need your own domain, and it has to point at the docker host we are using for the reverse proxy compose construct and the served applications.</p>
<p>Now the critical environment variables as seen for example in my ghost definition for this blog:</p>
<h3 id="theactualappsetup">The actual App-Setup</h3>
<p><strong>./ghost/docker-compose.yml</strong></p>
<pre><code class="language-YAML">version: &apos;3&apos;

services:
  ghost:
    image: ghost:alpine
    container_name: ghost-blog
    volumes:
      - $GHOSTCONTENT:/var/lib/ghost/content
    expose:
      - &quot;2368&quot;
    environment:
      - url=https://$DOMAIN
      - VIRTUAL_HOST=$DOMAIN
      - LETSENCRYPT_HOST=$DOMAIN
      - LETSENCRYPT_EMAIL=admin@antonherzog.com
    networks:
      - nginx-proxy

networks:
  nginx-proxy:
    external: true
</code></pre>
<p>Let&apos;s go over this file from top to bottom. It binds a folder on the docker host&apos;s filesystem as the content folder for the ghost blog. In addition it exposes the port ghost works on, &quot;2368&quot;. I think this is redundant, because the <em>ghost:alpine</em> image already exposes this port, but I thought it clearer to have it explicitly in your definition of the app. The first environment variable is also related to ghost: it uses <em>url</em> to know where links on the blog itself should point.<br>
Now to the juicy part. <code>VIRTUAL_HOST</code> is used by the nginx proxy container to know which incoming requests should go to the app. In my case this would be the value in $DOMAIN, and it would be mapped to port 2368 of the ghost container.<br>
The next two variables are for the <em>Let&apos;s Encrypt</em> setup. You have to specify your domain, and you can name an email address for contact. As is suggested in the <a href="https://github.com/ekkis/nginx-proxy-LE-docker-compose?ref=thenewandshiny.com">original repo</a>, you can now use <code>docker logs -f nginx-ssl</code> to watch the certification process in action. This is still a nailbiter for me, because I&apos;ve seen it fail so often when I tried this setup for the first time. The last thing here is the entry under <code>networks</code>. This definition makes sure that every container of the general setup, as well as any app container, works in the same context. For that to work, there has to be a network named &apos;nginx-proxy&apos; present. For that purpose I added a bash script under <strong>./init_network.sh</strong>.</p>
<h3 id="thepitfalls">The Pitfalls</h3>
<p>Here I will now list the biggest problems hindering me from deploying this, not so complicated, setup:</p>
<h4 id="configfilegenerationencoding">Config File Generation/Encoding</h4>
<p>This shouldn&apos;t need saying, in my opinion, but keep in mind which OS you are working on and whether your encodings are compatible when generating files. In my case I downloaded first the main <strong>docker-compose.yml</strong> file and then the <strong>./patched_nginx_gen/nginx.tmpl</strong> using a command copy-pasted from Github:<br>
<code>curl -O https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl</code><br>
Seems easy enough, doesn&apos;t it? Because I was working in Windows/Powershell at the time, I swapped the <code>-O</code> flag for <code>-UseBasicParsing</code> and appended <code>&gt; nginx.tmpl</code> - or so I thought. This was horribly wrong in multiple ways:</p>
<p>Firstly, &apos;curl&apos; under Linux returns the content of the web request you send, whereas the Powershell alias &apos;curl&apos; is actually just Invoke-WebRequest. This means the return value in Powershell is a PSObject with a lot of properties like...</p>
<pre><code>StatusCode        : 200
StatusDescription : OK
Content           : {{ $CurrentContainer := where $ &quot;ID&quot; .Docker.CurrentContainerID | first }}

                    {{ define &quot;upstream&quot; }}
                        {{ if .Address }}
                                {{/* If we got the containers from swarm and this container&apos;s port is published...
RawContent        : HTTP/1.1 200 OK
                    Content-Security-Policy: default-src &apos;none&apos;; style-src &apos;unsafe-inline&apos;; sandbox
                    Strict-Transport-Security: max-age=31536000
                    X-Content-Type-Options: nosniff
                    X-Frame-Options: deny
                    X...
Forms             :
Headers           : {[Content-Security-Policy, default-src &apos;none&apos;; style-src &apos;unsafe-inline&apos;; sandbox],
                    [Strict-Transport-Security, max-age=31536000], [X-Content-Type-Options, nosniff], [X-Frame-Options,
                    deny]...}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        :
RawContentLength  : 16288
</code></pre>
<p>So I was happily writing this whole data table into <code>nginx.tmpl</code>, and it took me longer than I&apos;m willing to admit to notice that. I then fixed the command to</p>
<pre><code>Invoke-WebRequest -UseBasicParsing https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl `
    | Select -ExpandProperty Content `
    &gt; nginx.tmpl
</code></pre>
<p>Sadly I still got errors in the realm of <code>can&apos;t read content of &apos;nginx.tmpl:4&apos;</code>. At this point I learned that <em>Let&apos;s Encrypt</em> rate-limits failed certificate requests. This forced me to leave the system alone for a while and come back later, when I wasn&apos;t <em>persona non grata</em> anymore.<br>
This of course helped me take a step back. In the end the problem is the encoding which <code>Invoke-WebRequest</code> uses for its content: Utf16LE. The little <code>&gt;</code> I used in the command just keeps the encoding of the string it gets and writes it into the file, as it should. When using <code>curl</code> you get Utf8-encoded files. So, finally, I managed to get a working command to fetch my files from Github:</p>
<pre><code>Invoke-WebRequest -UseBasicParsing https://raw.githubusercontent.com/jwilder/nginx-proxy/master/nginx.tmpl `
    | Select -ExpandProperty Content `
    | Set-Content -Encoding UTF8 nginx.tmpl
</code></pre>
<p>This whole problem, in my opinion, is a good argument for scrapping all the alias definitions in Powershell. The behaviour of <code>curl</code> and the imposter <code>Invoke-WebRequest</code> is not equal in any sense, not even without additional arguments. There is hope though. For a while now I&apos;ve tried to always use the written-out Powershell Cmdlets, as they are more readable in scripts and you learn them better that way. The latest installment of Powershell, <em>Powershell 6.X</em>, which runs on dotnet-core, does not use an alias for <code>curl</code> anymore. Instead it resolves to the native binary, as you can see when calling <code>Get-Command curl</code> in Powershell 6:</p>
<pre><code>CommandType     Name       Version    Source
-----------     ----       -------    ------
Application     curl.exe   7.55.1.0   C:\WINDOWS\system32\curl.exe
</code></pre>
<p>Using this <code>curl</code> you get plain Utf8 output, so copy/paste would just have worked.</p>
<h4 id="composeupvscomposebuild">Compose-up vs. Compose-build</h4>
<p>This is more of a point to keep in mind. In my setup I use a specially built docker image for the nginx-gen container. For that I just copy a config file to the appropriate path in the docker-gen structure.</p>
<p><strong>./patched_nginx_gen/Dockerfile</strong></p>
<pre><code>FROM jwilder/docker-gen
COPY ./nginx.tmpl /etc/docker-gen/templates/nginx.tmpl
</code></pre>
<p>This image gets built the first time you call <code>docker-compose up (-d)</code>. So in my head there was this instant connection: <em>docker-compose builds all referenced Dockerfiles</em>. But that&apos;s just not the case. When you call <code>docker-compose up (-d)</code> again with an image <em>patched-nginx-gen</em> already on your machine, it just uses that. That can produce unexpected update problems when you try to get the image right. You either have to call <code>docker-compose build</code> (or <code>docker-compose up -d --build</code>) or delete the current image with <code>docker image rm patched-nginx-gen</code>.</p>
<h4 id="humanerrorandmissingresearch">Human Error and missing Research</h4>
<p>The one thing I learned again is that a little more time spent on research before hacking away would have saved me a lot of time in the end. The hours and hours spent figuring out the first run with <a href="https://hub.docker.com/r/linuxserver/letsencrypt/?ref=thenewandshiny.com">linuxserver/letsencrypt</a> were completely unnecessary. I think this will be a recurring theme on this blog.</p>
<h3 id="summary">Summary</h3>
<p>In the end, setting up your own blog is not the biggest effort, especially when you have experience with webservers, domains and such things. Much of that experience can be substituted with some knowledge of docker. At some point in the future I will get back to the setup of this blog and refine it a bit.</p>
<p>As always I&apos;d like to get some feedback. Just pm/@ me on twitter. I&apos;m happy to answer questions and take constructive criticism.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[How to host your own Blog...]]></title><description><![CDATA[...or the story of how hosting your own blog sounds pretty easy, but looking for the tools to use is hard]]></description><link>https://thenewandshiny.com/how-to-host-your-own-blog/</link><guid isPermaLink="false">65604a5f1723fe00014aa172</guid><category><![CDATA[infrastructure]]></category><category><![CDATA[docker]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Mon, 15 Oct 2018 17:10:49 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1519337265831-281ec6cc8514?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=cd3f26c8b05c9bf5298eed05c20ae44e" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><h2 id="orthestoryofhowhostingyourownblogsoundsprettyeasybutlookingforthetoolstouseishard">...or the story of how hosting your own blog sounds pretty easy, but looking for the tools to use is hard</h2>
<img src="https://images.unsplash.com/photo-1519337265831-281ec6cc8514?ixlib=rb-0.3.5&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=1080&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ&amp;s=cd3f26c8b05c9bf5298eed05c20ae44e" alt="How to host your own Blog..."><p>This article is part of a series about setting up your own blog and hosting stuff in general:</p>
<ol>
<li><a href="http://antonherzog.com/how-to-host-your-own-blog/?ref=thenewandshiny.com">How to host your own Blog...</a></li>
<li><a href="http://antonherzog.com/how-to-host-your-own-secure-blog/?ref=thenewandshiny.com">How to host your own (secure) blog...</a></li>
</ol>
<p>While I wrote up <a href="https://antonherzog.com/finally-my-own-blog/?ref=thenewandshiny.com">my personal introduction</a> on this here blog, it was hosted on a small VPS as a simple docker container. The setup for that was quite nice, and it can mostly be achieved by sticking to the documentation surrounding the <a href="https://hub.docker.com/_/ghost/?ref=thenewandshiny.com">ghost docker image</a>.</p>
<p>In the end this just means doing something like the following:<br>
<code>docker run --name myblog -p 80:2368 -v /my/ghost/content:/var/lib/ghost/content  ghost</code></p>
<p>But at this point I already noticed that I had no support for https-access to the page. Although I&apos;m new to hosting my own website, I knew that this wouldn&apos;t do. Due to listening to podcasts and reading a lot about the topic, I was sure that you always use encrypted connections nowadays. Even if it&apos;s just so you know how to set that up when it gets serious, which is again a big part of my motivation behind writing this blog.</p>
<p>Still, at this point I was able to start writing, and so I did. If you are not concerned with enabling your readers to reach you over https, this would be almost enough. The only thing I&apos;d advise you to do is set up some kind of backup. In my case the volume functionality of <a href="https://www.digitalocean.com/?ref=thenewandshiny.com">DigitalOcean</a> was the obvious thing to use. You could also use git or just plain file storage somewhere kind of safe.</p>
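<p>For the plain-filestorage variant, a minimal backup sketch - the content path and archive name are placeholders for your own setup:</p>

```shell
# CONTENT_DIR is the host folder mounted into the ghost container
# (the -v flag of the docker run command above); placeholder value here.
CONTENT_DIR=${CONTENT_DIR:-./ghost-content}
mkdir -p "$CONTENT_DIR"                       # ensure the folder exists for this demo
BACKUP="ghost-content-$(date +%F).tar.gz"     # dated archive name
tar czf "$BACKUP" -C "$(dirname "$CONTENT_DIR")" "$(basename "$CONTENT_DIR")"
```

<p>Copy the resulting archive to git, object storage or anything else kind of safe.</p>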
<p>In the next installment of this here blog I&apos;ll tell you what I ended up using for the easy https setup. I can tell you - there are <strong>a lot</strong> of possible solutions.</p>
<p>As always I would be happy to see constructive criticism.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Finally - My own Blog - Personal things]]></title><description><![CDATA[About the Author]]></description><link>https://thenewandshiny.com/finally-my-own-blog/</link><guid isPermaLink="false">65604a5f1723fe00014aa171</guid><category><![CDATA[personal]]></category><dc:creator><![CDATA[Anton Herzog]]></dc:creator><pubDate>Sat, 06 Oct 2018 07:37:16 GMT</pubDate><content:encoded><![CDATA[<p>The big B-Word is looming over my head for some years now. This is the first serious try to get it going.</p><p>First things first: What should this blog be about? Beside and during my dayjob as a desktop developer I like to try out a lot of tech. Whether it be actual physical tech like computers or software or something in between. Over time I noticed that talking or writing about my experiences helps me to cement my new skills. In addition to that it&apos;s really helpful if you are able to reference yourself in writing when someone (for example even yourself) has a question about a topic. </p><p>At this point I will give a short introduction into who I am, so feel free to skip over this part if this is of no interest to you. This introduction will appear exeptional and only as a general impression of mine. This blog will be mostly about the technical things. I feel that personal background is still helpfull when reading other peoples thoughts.</p><figure class="kg-card kg-image-card"><img src="https://thenewandshiny.com/content/images/2018/10/altstadt-bruhlova-terrace-dresden-416009.jpg" class="kg-image" alt loading="lazy"></figure><p>I grew up in Dresden in Germany, Eastern Germany to be exact. I was born in 1990 so East- and West-Germany are not countries I lived to see. It&apos;s just Germany for me. I went through highschool good enough for my own needs and then had the choice between taking year of and working or going directly to university. 
I chose the latter and would not recommend that to my 18-year-old self today. I started studying computer science and found out that, while I liked light programming at the time, the deeper technical parts weren&apos;t all that interesting to me. After going on with my studies for 18 months, I left school in order to work for a while and find out what I wanted to do with myself. I did this and that, but ended up working in HELLERAU, an event house on the outskirts of Dresden. </p><figure class="kg-card kg-image-card"><img src="https://thenewandshiny.com/content/images/2018/10/Festspielhaus_Hellerau-_central_hall_at_daylight-_from_upper_level.jpg" class="kg-image" alt loading="lazy"></figure><p>There I worked, like you see in the picture, as a &quot;Hand&quot;, constructing stage parts and setting up audio and video equipment. The time I spent in Hellerau was really eye-opening to me, in the sense that I noticed that physical work, as well as working on something like a product in the end, is very satisfying. </p><p>When the question of my future education came up again - at this point I was 21 years old - I took what I had experienced and started searching for an apprenticeship in software development. At this point I have to thank the IHK (Industry and Trade Chamber) of Germany, because some years before they had defined a new type of apprenticeship called MATSE. This translates to &quot;mathematical-technical-software-developer&quot;. It&apos;s meant to put a bigger emphasis on the maths part of software development and thus closes the gap between studying CS at a university and an apprenticeship without deeper dives into mathematics. Sounds nice, doesn&apos;t it? Let me assure you, it really is. 
</p><figure class="kg-card kg-image-card"><img src="https://thenewandshiny.com/content/images/2018/10/EASE_Focus.png" class="kg-image" alt loading="lazy"></figure><p>After randomly stumbling upon a company with an open spot for a MATSE, I also got to learn that the company in question specialises in Acoustic Simulations. This was a dreamlike combination for me, because I already had some experience in that field from working at the Festspielhaus in Dresden. Also interesting was that this company was situated in Berlin, so I got a bit of a scenery change on top of my new job.</p><figure class="kg-card kg-image-card"><img src="https://thenewandshiny.com/content/images/2018/10/Berlin_Panorama_Mitte.jpg" class="kg-image" alt loading="lazy"></figure><p>In the next two and a half years, I was taught a lot about software development at my company as well as at the school, which got roughly one out of every three weeks of my time. I was lucky to be working in a small company where you got to know everybody, and even luckier because all the people working there were nice and interesting.</p><p>For all those reasons I am, almost exactly six years later, still working at this company and feel so passionate about this job and the whole field of software development and tech in general that I want to give something back. For those and the more selfish reasons listed above, I decided to write this blog.</p><p>This concludes my personal introduction. In the next post I am going to talk about what I had to do to set up this blog and all the things I broke before it finally worked.</p><p>As I am quite new to this whole writing-for-other-people business, I&apos;d like to get constructive criticism about my writing. This is especially important to me, because English is not my native language.</p><p>Thank you for reading and till the next time...</p>]]></content:encoded></item></channel></rss>