DevOps is the new black

Yes, DevOps is the new black. I might not be the first to use the phrase, but it’s so obviously true.
I am currently working on building some kind of offer around DevOps, so you’ll probably see more posts on the topic.
But two things struck me recently, and I decided to make a post out of them. Both are related to the people side of DevOps: the first is the importance of the people involved in your DevOps transformation or organization; the second, a corollary of the first, is the recruitment of those people.

People matter

People are important, that’s obvious. However, a recent customer engagement made me realize just how important they are to a successful DevOps transformation. I cannot go into many details, but the broad outline is quite simple.
The organization is a software team within a large company. It delivers its own product, to be used by other business units, and has decided to run its own operations. A perfect candidate for DevOps, right? Using a custom approach based on industry standards, an Agile/DevOps organization was designed and implemented.
Fast forward one year. The transformation is quite successful; the stability and quality of the product have improved. The only thing that prevented the team from being outstanding seems to be the people. Don’t misunderstand me, I am not judging this team and its members. But for an Agile/DevOps transformation to be successful you need the right people, with the right mindset. And not everyone fits the bill, just as some people are more comfortable in an open space and others prefer a closed-off office. It has been the same with Agile practices, which could not be applied to every situation and team.
We need to pay extra attention to the people we include in these transformation projects, if we want them to succeed.

Recruitment is crucial

As a follow-up to the above assessment, recruiting people for your team is important. Yes, I know, it has always been important. However, take a second look at the title of this post. Done? Alright. Now have a look at the job offers in IT. See any pattern?
DevOps is written everywhere.
It is somewhat justified, as DevOps encompasses many modern practices that we have implemented or would like to implement: automation, continuous delivery, continuous deployment, testing, QA, etc.
The issue is that not every job offer is for a DevOps team or project. Most of the offers are for traditional sysadmins or developers with a hint of DevOps. That is a good trend, but not a good fit for a full DevOps profile.

People matter in DevOps environments, so take care of your profile 🙂

Note: this post was inspired by a LinkedIn post, in French, that I cannot find any more, regarding the abusive use of DevOps in job postings in France. If anyone can find it, I’d love to thank and credit its author!

Cloud is for poor companies

I heard that statement from Greg Ferro (@etherealmind) in a podcast a few weeks back.

I have to admit, I was a bit surprised and had a look at Greg’s tweets and posts, while finishing up the podcast.

Of course, the catchphrase is meant to shock, but it is quite well defended, and I have to agree with Greg up to a point.

Let me try to explain Greg’s point, as far as I have understood it.


The IaaS/PaaS platforms, and some of the SaaS ones, are aimed at providing you with off-the-shelf functionality and apps, so you can develop your product faster and focus on your own business, rather than building every piece of expertise needed to support it. However, there are some underlying truths, and even drawbacks:

  1. When you are using someone else’s “product”, you are tied to what this company will do with it. For example, if you were not a Citrix shop and wanted to use Microsoft Remote Desktop on Azure… you are stuck, as MS has discontinued RDS support on Azure in favor of Citrix. To be a little less extreme, you will have to follow the lifecycle of the product you are using in the cloud, whether or not it matches your own priorities and planning. If you stay on-premises, even with a commercial product, you can still keep an old version, admittedly without support at some point in time. If you build your own solution… then you’re the boss!
  2. The cloud services are aimed at being up and running in minutes, which helps young companies and startups focus their meager resources on their business. And that’s good! Can you imagine starting a company today and having to set up all of your messaging/communication/office/email solutions over a few weeks before being able to work for real? Of course not! You’ll probably get started on Gapps or Office 365 in a few minutes. It’s the same if you start building a software solution, IoT for example. Will you write every service you need from the ground up? Probably not. You’ll start with PaaS building blocks to manage the message queueing and device authentication. Nevertheless, as your product gains traction and your needs become very specific, you will surely start to build your own services, to match your needs exactly.
  3. And last but not least… the cloud companies are here to make money. Which means that, at some point in time, it may no longer be profitable for you to use their services rather than build your own.


There might be some other drawbacks, but I would like to point out a few advantages of cloud services, even over the long term.

  1. Are you ready to invest the kind of money these companies have invested in building their services and infrastructure? Granted, you might not need their level of coverage (geography, breadth of services, etc.).
  2. Are you ready to make long-term plans and commitments? You need them if you want to invest in and build those services yourself.
  3. You might be a large, rich company, but if you want to start a new product/project, cloud services may still be a good solution, until you’ve validated that long-term plan.


Say you’re building a video streaming service. You would start using AWS or Azure to support all of your services (backend, storage, interface, CDN, etc.). But the cloud providers have built their services to satisfy the broadest spectrum, and they may not be able to deliver exactly what you need. When your service becomes more popular, you will start building your own CDN, or supplement the one you have with a specific caching infrastructure, hosted directly within the ISPs’ infrastructure. Yes, this is Netflix 😀


Any thoughts on that?

Velocity London ’17 – content

I already posted about this event a few weeks ago, with a focus on my experience and the organization: https://cloudinthealps.mandin.net/2017/11/03/velocity-london-2017/

This time, I would like to share a short summary of what I have learned during these 4 days.
The first two days were a Kubernetes training, so nothing very specific here. I learnt a lot about Kubernetes, which is to be expected 🙂

During the two conference days, I attended the keynotes, and several sessions.
The keynotes are difficult to sum up, as they were very different, and each was a succession of short talks. I have attended several large-scale conferences in the past, and this was the first time I felt that the speakers were really on the edge of research and technology. They were not there specifically to sell us their new product, but to share where their work was headed, what the outcomes could be, etc.
They broached subjects ranging from bio-software to chaos engineering, from blockchain to edge computing. Some talks were really oriented toward IT & DevOps, and some brought a completely different view of our world.
Overall, it felt energizing to hear so many brilliant minds talk about what is mostly our future!

The sessions were a bit more down to earth and provided data, content and feedback that could bring some changes back home. I was surprised that most sessions concentrated on general information and feedback, and not so much on specific tools and solutions. I expected more sessions from the DevOps toolchain vendors (Chef, Puppet, GitLab, Sensu and so on). Actually, even when sessions were presented by these software companies (Datadog, Yahoo, Bitly, Puppet, PagerDuty), they never sold their product. Instead, they used their experience and data to provide very useful insights and feedback.
What I brought back can be split into two categories: short-term improvements/decisions that could be implemented as soon as I got back (which I did, partly), and trends that would have to be thought through and analyzed, and then maybe crafted into a new offer or approach.
In the first category:
• Blameless post-mortems. A lot of data was analyzed, with one takeaway for us: keep the story focused and short. If you do not have anything to add apart from the basic timeline… maybe you’re not the right team to handle the post-mortem 🙂
• Solving overmonitoring and alert fatigue. This talk was a gamechanger for me. What Kishore Jalleda (https://twitter.com/KishoreJalleda) stated was this: you may stop monitoring applications and services that do not behave respectfully. For example, if you get more than X alerts every day from an application, you may go to the owner of that application and say: “as you are generating too much noise, we will disable monitoring for a while, until the situation comes back to something manageable by the 24*7 team.” Of course, you have to help the product team get back on track and identify what is monitoring and what is alerting (https://cloudinthealps.mandin.net/2017/05/12/monitoring-and-alerting/). And you need top management support before you go and apply that 🙂
• On the same topic, a session about monitoring containers came down to the same issue : how do you monitor the health of your application? Track the data 🙂

The second group covered mostly higher-level topics, on how to organize your teams and company for a successful DevOps transformation. I noted an ever-spreading use of the term “SRE”, which I would qualify as misused most of the time. At the very least, SRE now seems to describe any team or engineer in charge of running your infrastructure.
Another trend, in terms of organization, was a model based on this famous SRE team providing tooling and best practices to each DevOps/feature/product team. I’ll probably post about it at length sometime later.

To be certified or not to be certified?

I have been pushing my team to get certified on Azure technologies for the past 24 months, with varying degrees of success. I am quite lucky to have a team that does not question the value of certification, however much they question the relevance of the questions.
But, as I am now closing in on almost 15 years of IT certifications, I feel quite entitled to share my views and opinions.
Keep in mind that I work in infrastructure/Operations, and in France, which will probably give some bias to my analysis 🙂

I will start with some general comments on the value of certifications from a career perspective, and then dive into some specifics for each vendor I have certified with over the years. Some of my exams are a bit dated, so please be nice. I will conclude with my general tips for preparing for an exam.

As I said, it’s been almost 15 years since my first cert, and since I started it before even being employed, I have some insight into the relevance of such an investment for a career. I took my first dip into the certification world during a recruitment process with a consulting company. We were two candidates: I was the young guy, and the other one already held his Microsoft MCP. I felt, at the time, that I could benefit from one myself and compensate for some of my lack of experience. As I registered for my first MCP exam, for Windows 2000 (!), I was contacted to join a kickstart program to bring my certification level up to the Microsoft MCSE, and everything started from there.
After a few months, I passed the final MCSE exam (out of 7 at that time) and was recruited, to work on Cisco networking, which had nothing to do with my skills, by the very same company that had interviewed me when I discovered the MCP. I still think that having gone through the certification path did a lot to convince my future boss of my motivation and ability to work hard. Over the years I refreshed my MCSE with each version of Windows (from 2000 to 2016) and added a few new ones, depending on what I was working on in my positions: Cisco, RedHat, VMware and Prince2.

Even though it was not obvious in my first job, the following ones were pretty clear cases where my certifications held some value for my employer. We discussed it rather openly during some of the interviews. I have also been in a recruiter’s shoes myself a few times, and here is why I feel certifications are useful.
First, they show that you can focus on sometimes grueling work for a while. Passing these kinds of exams almost always forces you to learn tons of new information, about software or devices that you may never have handled.
Then, they show dedication when you maintain them over time, as long as they hold at least some value for your current position.
And, let’s be candid, they show you can take one for the team, because almost every vendor partnership requires some level of certification.
And, as I said, I know for a fact that I was recruited, at least partly, thanks to my certs on two occasions.
On the salary side, I am not sure about the impact of certifications. I do not feel that they play a part there, but I cannot prove or disprove it.

That being said, when you take one of these exams, you will experience very different things depending on the vendor, and sometimes on the level of certification. Let’s take a closer look.
We’ll start with my longest-running vendor: Microsoft. Apart from one beta test ten years ago, I have always had some kind of MCQ with them. You may get some variation around that: drag and drop, point and click, etc. But, by and large, nothing close to a simulator or designer. This led to a bad reputation a while ago, when you could hold an MCSE (which was like the Holy Grail of Microsoft certification) while having absolutely no hands-on experience with Windows. They have kept the same format for the Azure exams, and are taking some heat there too, because the exams are deprecated almost as soon as they come out. I wonder whether they are working on some other way to certify.
Cisco has had a router/switch simulator for a long time, which made for some rather interesting exams at the lowest levels. I only took the CCNA, 15 years ago, so I do not know how it goes at higher levels. The only caveat, from my perspective, was that the simulator did not allow for inline help and auto-completion, which you do have in real life.
RedHat, with the RHCE exams, offered the most interesting experience in my view. The exam was entirely lab-based, split into two sections. First you had to repair a broken RHEL server, three times. Then you were given a list of objectives that you had to meet with a RHEL server. You could choose whichever configuration you preferred, as long as the requirements were met (with SELinux in enforcing mode, obviously 🙂 ). You had a fully functional RHEL, with the man pages and documentation, but no internet access. I still feel to this day that this format let you prove that you really were knowledgeable and had the necessary skills to design and implement a Linux infrastructure. And the trainers were always fun and very skilled.

I also certified on VMware vSphere for a while, and that brought me to a whole new level of pain. The basic VCP level is fine, along the same lines as an MCP. But when I started to study for the next level, the VCAP-DCD (which stands for VMware Certified Advanced Professional-Data Center Design), I had to find new ways of preparing and learning. You see, where a usual exam requires you to learn some basic facts by heart (like the default OSPF timers, or the minimum Windows 2000 workstation hardware requirements), the scope is still limited. For this exam, you had to be able to completely design a vSphere infrastructure, along the official VMware guidelines, from all of the perspectives (compute, storage, network). There were only a few MCQs; most of the questions were designs, where you had to draw, Visio-style, the required infrastructure from a list of existing constraints and requirements. Believe me, it is also one of the few exams where you truly need to manage your time. I am a fast thinker and had always completed my exams in under 25% of the allotted time. On my first take at the VCAP-DCD, I almost did not finish within the 3h30 I had. And I failed. There has been talk in the certification world that this exam (at least in its version 5) was the second hardest in the world at that level.

Over all of these exams, I can share some advice that is quite general and absolutely not groundbreaking. First, you need to work to pass the exam. Get to know the exam blueprint and identify whether there are areas you do not know about. Watch some videos, practice in labs, learn the vendor’s recommendations. Second, get a decent practice exam, to get used to the form and type of questions, and also to check whether you are ready to register for the real thing. And last: work again, read, practice, and if you can, discuss with other people preparing for the same exam, or with someone who has already taken it. We are not at liberty to discuss everything that is in an exam, but at least we can help each other.

The short version is: get certified, it is worth it, at least in France where most IT people do not take the time to go through the exercise.

Velocity London 2017

This October I had the opportunity to go to the Velocity conference in London (https://conferences.oreilly.com/velocity/vl-eu). The exact title of the conference is “Build and maintain complex distributed systems”. That’s an ambitious subject. The event had been suggested by a customer who went to one of the US editions and found it a brilliant event, both in terms of the DevOps subjects covered and in terms of the attendees & networking. So here I am, back in London, for 4 days of DevOps and cloud talks.

I started the conference with a special two-day training on Kubernetes, by Sebastien Goasguen (@sebgoa).

The training was really intense, as Sebastien covered many of the standard objects and tools of the platform, as well as a few custom options we can use. We played with Minikube on our laptops, which is a really great way to get the experience of a Kubernetes cluster in a small box. It was really packed, and we had to rush to keep up with Sebastien’s tests and labs, even with his GitHub repo containing most of the scripts and K8s manifests. I came out of those days a bit tired from all the things I had learned and tested, with a long list of new functions and tools to try, new ideas to explore, etc. It was immensely fun, thanks Sebastien!

The conference itself was rather overwhelming, and a big surprise for me. I am used to large conferences like VMworld or Tech-Ed, where you get the good word for the year to come from a vendor and its ecosystem. Most of the sessions there obviously dive into the products and how to use them.
At Velocity, almost all the keynote speakers were somehow working in research, or in such bleeding-edge domains that they might not even exist yet. I loved being presented with what might be happening, by people who are scientists at heart, not just marketing infused with a light touch of technology. Moreover, the sessions themselves were mostly feedback from the speakers’ own experience in a specific domain/issue/subject. Usually they were not about a particular tool or software suite, but rather about how to make things work with DevOps and large distributed systems.

Overall, I really enjoyed this conference, because it was very well organized, and small enough to be a human experience. As we spent four days with the same group of around 400 people (227 to this day according to the attendee directory), in a rather small area, you often cross paths with the same people, which makes it easy to start conversations. They also came up with a lot of ribbons that you can attach to your badge, to let others know what you are here for.

I was used to much larger conferences, where I always found networking a bit difficult if you did not know a few people beforehand. In Velocity’s case, it is so easy: there are only a handful of attendees and speakers, and you can meet everyone informally, during lunch breaks or just by asking. It came as a surprise to me to be able to simply chat with some of the speakers who had impressed me, like Juergen Cito and Kolton Andrus, just by going and sitting with them at a lunch table.

I’ll probably write about what I learned and took away from this conference in the near future, so stay tuned!

New security paradigms

Obviously you have heard a lot of talk around security, recently and less recently.
I have been in the tech/IT trade for about 15 years, and every time I have met with a new vendor/startup, they would start by saying that we did security wrong and that they could help us build Next Gen security.

I am here to help you move to the Next Gen 🙂
All right, I am not. I wanted to share a short synthesis of what I have seen and heard over the past months around security in general, and in the public cloud in particular.
There are a few statements I found interesting:
• Perimeter lockdown, AKA perimeter firewalls, is over.
• No more need for IDS/IPS in the public cloud; you just need clean code (and maybe a Web Application Firewall).
• Public cloud PaaS services are moving to a hybrid delivery mode.

Of course, these sentences are not very clear, so let me dig into those.

First, perimeter security. The “old” security model was built like a medieval castle, with a strong outer wall and some heavily defended entry points (firewalls). There were some secret passages (VPNs), and some corrupt guards (open ACLs 🙂 ).

(Image: Herstmonceux Castle with its moat)

This design has had its day and is no longer relevant. It is way too difficult to manage and maintain thousands of access lists, VPNs, exceptions and parallel Internet accesses, not to mention the hundreds of connected devices we have floating around.

A more modern design, for enterprise networking, often relies on device security and identity management. You will still need some firewalling around your network, just to make sure that a dumb threat cannot get in by accident. But the core of your protection, networking-wise, will be based on a very stringent device policy that allows only safe devices to connect to your resources.
This solution also requires good identity management, ideally with some advanced threat detection in place. Something that can tell you when accounts should be deactivated or expired, or when behavior is abnormal: for example, two connection attempts for the same account from places thousands of kilometers apart.
Those who have already set up 802.1X authentication and Network Access Control on the physical network for workstations know that it requires good discipline and organization to work properly without hampering actual work.
To complete the setup, you will need to secure your data itself, ideally using a solution that manages the various levels of confidentiality, and can also track the usage and distribution of the documents.

As I said: no more need for IPS/IDS. Actually, I think I have never seen a real implementation that was actually used in production. Rather, there was almost always an IPS/IDS somewhere on the network, to comply with the CSO office’s requirements, but nothing was done with it, mostly because of all the noise it generated. Do not misunderstand me, there are surely many real deployments in use that are perfectly valid! But for a cloud application, it is strange to want to go down to that level when your cloud provider is in charge of the lower infrastructure layers. The “official” approach is to write clean code, to make sure that your data entry points are protected, and then to trust the defenses your provider has in place.
However, as many of us do not feel comfortable enough to skip the WAF (Web Application Firewall) step, Microsoft at least has heard the clamor and will shortly add the possibility to put a WAF in front of your App Service. Note: it is already possible to insert a firewall in front of an Azure App Service, but this requires a Premium service plan, which comes at an *ahem* premium price.
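In the meantime, if you want to put something in front of an App Service today, an Application Gateway in its WAF SKU is a common building block. Here is a minimal Azure CLI sketch, with made-up resource names, assuming the VNet, subnet and public IP already exist and using the App Service hostname as the backend pool:

```
# Hypothetical names; the VNet, subnet and public IP are assumed to exist already.
az network application-gateway create \
  --name demo-appgw \
  --resource-group demo-rg \
  --location westeurope \
  --sku WAF_Medium \
  --capacity 2 \
  --vnet-name demo-vnet \
  --subnet appgw-subnet \
  --public-ip-address appgw-pip \
  --servers demo-webapp.azurewebsites.net \
  --http-settings-protocol Https \
  --http-settings-port 443
```

This is only a sketch: in a real setup you would still have to sort out host header overrides, certificates and WAF rules.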

And that was my third point: PaaS services coming to a hybrid delivery mode. Usually, PaaS services in the public cloud tend to have public endpoints. You may secure these endpoints with ACLs (or NSGs on Azure), but this might not be very easy to do, for example if you do not have a precise IP range for your consumers. This point has been discussed and worked on for a while, at least at Microsoft, and we are now seeing the first announcements of PaaS services that are usable through a VNet, and thus a private IP. This leads to a new model, where you may use these services, Azure SQL for example, for your internal applications, through a Site-to-Site VPN.
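As an illustration of where this is heading, the first building block already showing up is the VNet service endpoint. A rough Azure CLI sketch (resource names are made up, and this assumes the feature is available for the service you target) would look like this:

```
# Made-up resource names; enable the Microsoft.Sql service endpoint on an
# application subnet, then allow that subnet on the Azure SQL logical server.
az network vnet subnet update \
  --resource-group demo-rg \
  --vnet-name demo-vnet \
  --name app-subnet \
  --service-endpoints Microsoft.Sql

az sql server vnet-rule create \
  --resource-group demo-rg \
  --server demo-sql-server \
  --name allow-app-subnet \
  --vnet-name demo-vnet \
  --subnet app-subnet
```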

These statements are subject to discussion, and will not meet every situation, but I think they are a good conversation starter with a customer, aren’t they?

Testing days, K8S, Minikube and Azure Stack

As I was getting ready for the Velocity conference (https://conferences.oreilly.com/velocity/vl-eu) and the Kubernetes training by Sebastien Goasguen, I found myself caught in a spiral of testing.

First, I needed a K8s cluster running for said training. Sebastien suggested Minikube, which is a nifty way of having a local K8s cluster on your workstation to play with. As that was too easy, I went through my K8s the hard way (http://cloudinthealps.mandin.net/2017/09/14/kubernetes-the-hard-way-revival/) on Azure again, to be able to work on the real stuff and use kubectl from my Linux environment (embedded in Windows 10). Then I realized that I might have internet issues during the conference and would be happy to have Minikube running locally.

So back to square one, and to setting up Minikube and kubectl properly on Windows.
I tried the easy way, which was to download Minikube for Windows and run it. It failed, of course, and I could not find out why. After some trial and error, I simply updated VirtualBox, which I was already using for personal stuff. I then just had to reset the Minikube setup I had with “minikube delete” and start fresh with “minikube start”, and voilà: a brand new Minikube + kubectl setup fully on Windows 10 (and a backup on Linux and Azure).
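For the record, once VirtualBox was up to date the whole reset boiled down to very little; the nginx deployment at the end is just my own quick sanity check, not part of any official procedure:

```
# Throw away the broken local cluster and recreate it from scratch
minikube delete
minikube start

# Quick sanity check that kubectl is pointed at the new cluster
kubectl get nodes
kubectl run hello-nginx --image=nginx --port=80
kubectl get pods
```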

But as I was working on that, I stumbled on some news about Azure Stack (https://social.msdn.microsoft.com/Forums/azure/en-US/131985bd-bc56-4c35-bde8-640ac7a44299/microsoft-azure-stack-development-kit-201709283-now-available-released-10102017?forum=AzureStack), and specifically the Azure Stack Development Kit (ASDK), which allows for a one-node setup of Azure Stack.

This tickled my curiosity gene. A quick Google search to see whether there were any tutorials or advice on running a nested Azure Stack on Azure, and here I am, setting up just that.
Keep in mind that the required VM (E16s-V3) is just above 1€/hour, which means 800€ monthly, so do not forget the auto-shutdown if you need to control your costs 🙂
The guide I followed is here: https://azurestack.blog/2017/07/deploy-azure-stack-development-kit-on-an-azure-vm/
I did almost everything using the Azure portal; it might be useful to build a script to do it more quickly.
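If I ever script it, it would probably look something like this. Names, region and the password placeholder are mine; the only points that matter are the E16s v3 size and the auto-shutdown:

```
# Hypothetical script; resource names are placeholders
az group create --name asdk-rg --location westeurope

az vm create \
  --resource-group asdk-rg \
  --name asdk-host \
  --image Win2016Datacenter \
  --size Standard_E16s_v3 \
  --admin-username asdkadmin \
  --admin-password '<a-strong-password>'

# Auto-shutdown at 19:00 UTC to keep the ~1 EUR/hour bill under control
# (if your CLI version has this command; otherwise set it in the portal)
az vm auto-shutdown \
  --resource-group asdk-rg \
  --name asdk-host \
  --time 1900
```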
Note that the email with the download link takes some time to be sent, so you might start with that. Or you can use the direct link : https://aka.ms/azurestackdevkitdownloader

And this first test did not work out the way I expected. There were many differences between the article, the official doc, and what I encountered while deploying. So, back to the first step: I redeployed the VM, re-downloaded the SDK, and started from scratch following the official doc (https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-deploy). I just added the tweak to skip the physical host check, so that the installation would continue even though it was running on a VM.

After a few hours, voilà, I had a fully running Azure Stack within an Azure VM!
Now I just have to read the manual and play with it. That will be the subject of a future post, keep checking!

My experience @ Experiences’17

It has been a long two-day event for Microsoft France.
I wanted to summarize this event and what happened during those two days.
I will not be exhaustive about all the announcements and sessions that were offered.
This will just be my experience (pun intended) of the event.

This year I did not present a session, mainly because the process to submit one was very unclear, and I did not want to fight against smoke. One last precision: it was only my second Experiences, and I never attended its predecessor, Techdays.

As I said, it is a two-day event, split between a business day and a technical day. I attended both, as my role is also split between the two aspects. I found that the distinction was very visible in the content of the various sessions, apart from the keynotes (and the Partner Back to School session). Overall, the technical level is rather low, but most of the MS staff is onsite and you can have very interesting discussions with them, as well as with the other attendees.

A word on the attendees: there are very different groups there. I met numerous Psellers and MVPs, as well as Microsoftees. Obviously, there were many customers and partners around, some of them just for show, some with a very specific project or problem in mind. And there were people I am not accustomed to seeing at business events, but who bring a refreshing variety to the general attendee population: students from multiple schools (engineering, but not only), and employees who managed to get their managers’ approval because the event is free.
I am not sure whether it is the case in other countries, but in France we usually have a hard time getting approval to travel abroad and pay for a conference. It is not true of every company, but it has been widespread enough that some Europe-wide events are replicated on a smaller scale in France so that French techies can get the content as well.

Back to the event itself: the rhythm was rather intense this year, and I missed many sessions in order to meet and talk with everyone I wanted to. As with all events, the quality of a session depends heavily on the quality of the speaker. The ones I attended were very good and made a lot of effort to stay focused on their topic and keep everyone on board.
As for the keynotes, well, they were of the expected quality, on par with Inspire, with several videos, demos, interviews, etc. As was the case at Ignite, some talks were highly specific (on AI or quantum computing) and made me believe that Satya Nadella is taking some cues from Elon Musk. It was very different from the Tech’Ed days, where we were shown the new interface for System Center, or a new tablet.

The buzzword this year at Experiences was AI (it was Blockchain last year). I have to admit that the AI Hackademy included some very interesting ideas and startups. I did not manage to visit them all but I was pretty impressed to see so many startups working on the subject, and bringing fresh ideas and concepts to our world.

All right, everything was very positive, I am convinced. I will share one mildly negative thought, though: AI was sometimes stretched thinly over a piece of software or an idea. I’ve seen some interesting uses of statistics, or even good programming and algorithms, but saying these were truly AI was going a bit far. At least that’s my opinion, but we may not all have the same definition… just as with what a cloud is 🙂

https://experiences.microsoft.fr/business/experiences-17-morceaux-choisis/

Kubernetes the hard way, revival

This week, I had my first troubles with GitHub while trying to push all the updates I had made to “Kubernetes, the hard way” so it would use Azure. Long story short, I had to ditch everything I had written for the new guide and start over, as there were too many new commits on Kelsey’s guide.
This misadventure pushed me to do several things :
1) Create and maintain my own fork of Kelsey’s guide : https://github.com/frederimandin/kubernetes-the-hard-way
2) Rewrite this guide, and make it work on Azure
3) Use Visual Studio Code, with the GitHub and Markdown plugins. As it was also a first… some pain was involved.
4) Go several steps beyond, in order to play a bit more with K8S

The first two steps are done and committed, as you may see on GitHub. It took less work than expected, as most of the commands I wrote for the previous guide were still usable. I did have to redeploy the test K8S cluster to confirm that everything was fine. Please, if you have some spare time, do not hesitate to use this guide and give me some feedback!
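For those interested, keeping the fork in sync with Kelsey’s repository now boils down to a few git commands. This is just how I do it, nothing official:

```
# One-time setup: add Kelsey's repository as the "upstream" remote
git clone https://github.com/frederimandin/kubernetes-the-hard-way
cd kubernetes-the-hard-way
git remote add upstream https://github.com/kelseyhightower/kubernetes-the-hard-way

# Each time upstream moves: replay my Azure changes on top of it
git fetch upstream
git rebase upstream/master
git push --force-with-lease origin master
```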

Then I tried several tests, in addition to the ones included in the guide.
First I deployed a container from the Docker public Registry : https://hub.docker.com/r/apurvajo/mariohtml5/
This went quite well, and I had the infinite Mario running for several hours, accessible from the outside world on its own port.
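The deployment itself was nothing fancy, roughly the following; note that the container port is an assumption on my side, so check the image’s documentation:

```
# Deploy the image from the public Docker Hub and expose it on a NodePort
# (the container port 8080 is an assumption, adjust to what the image uses)
kubectl run mario --image=apurvajo/mariohtml5 --port=8080
kubectl expose deployment mario --type=NodePort --port=8080
kubectl get svc mario   # note the NodePort, then browse to <node-ip>:<node-port>
```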

At that point I got sidetracked… I started to update this blog and realized that the website was not using HTTPS. I figured now would be a good time to fix that, and I thought about using Let’s Encrypt (https://letsencrypt.org). As it was my first time, it took me a while to figure out exactly what to do. Actually, the easiest way was to just activate the extension for the web app on Azure and follow the guide. We are now securely discussing on https://cloudinthealps.mandin.net 🙂

That was fun, but I still have not started to play with Helm (https://www.helm.sh), which was the original idea.
I’ll have to postpone that activity and blog about it later!

Kubernetes and Azure Container Instances

Following my recent ventures in the Kubernetes world (http://cloudinthealps.mandin.net/2017/08/23/kubernetes-the-hard-way-azcli-style/), I now had a functional Kubernetes cluster, on Azure, built with my own sweat and brain.

But that was not the original goal. The original goal was to try and play with Azure Container Instances, and its Kubernetes connector https://azure.microsoft.com/fr-fr/resources/videos/using-kubernetes-with-azure-container-instances/

Following the guide on GitHub was relatively straightforward and painless (https://github.com/Azure/aci-connector-k8s), but I encountered two small issues.
One was that I am not completely comfortable yet with all things K8s, and I had to read a bit about taints to understand why the ACI connector is not used by the K8s scheduler by default. Not a big deal, and a good way to get to know K8s a little better.
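To make that explicit: the connector registers itself as a virtual node carrying a taint, so only pods that explicitly tolerate it (or target it) land there. A rough sketch of what that looks like; the node name and taint key below are from memory, so check them with kubectl first:

```
# Check the taint the connector node carries (node name may differ on your setup)
kubectl describe node aci-connector | grep -i taint

# A pod that explicitly targets the virtual node and tolerates its taint
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: aci-test
spec:
  nodeName: aci-connector        # the connector's virtual node (check the actual name)
  tolerations:
  - key: azure.com/aci           # assumption: replace with the taint key shown above
    operator: Exists
  containers:
  - name: nginx
    image: nginx
EOF
```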
The second one was maybe due to the fact that I had never used ACI before, maybe not. I logged it as an issue on the GitHub project (https://github.com/Azure/aci-connector-k8s/issues/33) to make sure it gets taken into consideration.
The short story is that a resource provider was not registered on my subscription. However, the error message did not show up in the kubectl output, only in the Activity log of the Azure portal. Another good occasion to learn and dig into some tools.
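For anyone hitting the same wall, the fix itself is a one-liner once you know where to look. As far as I can tell, Microsoft.ContainerInstance is the provider ACI relies on; check with `az provider list` if in doubt:

```
# Check whether the ACI resource provider is registered on the subscription
az provider show --namespace Microsoft.ContainerInstance --query registrationState

# Register it if it is not
az provider register --namespace Microsoft.ContainerInstance
```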