Testing days, K8S, Minikube and Azure Stack

As I was getting ready for the Velocity conference (https://conferences.oreilly.com/velocity/vl-eu) and the Kubernetes training by Sebastien Goasguen, I found myself caught in a spiral of testing.

First, I needed a K8s cluster running for said training. Sebastien suggested Minikube, which is a nifty way of running a local K8s cluster on your workstation and playing with it. As that felt too easy, I went through my Kubernetes the hard way guide (http://cloudinthealps.mandin.net/2017/09/14/kubernetes-the-hard-way-revival/) on Azure again, to work on the real stuff and use kubectl from my Linux environment (embedded in Windows 10). Then I realized that I might have internet issues during the conference, and that I would be happy to have Minikube running locally after all.

So back to square one, and to setting up Minikube and kubectl properly on Windows.
I tried the easy way, which was to download Minikube for Windows and run it. It obviously failed, and I could not find out why. After some trial and error, I simply updated VirtualBox, which I was already using for personal stuff. I then just had to reset the Minikube setup I had, with “minikube delete”, and start fresh with “minikube start”. Et voilà, I had a brand new Minikube+kubectl setup fully on Windows 10 (and a backup on Linux and Azure).
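For the record, the reset sequence is short. Roughly this, assuming VirtualBox as the VM driver (adjust to your own setup) :

  # wipe the broken local cluster and its VM
  minikube delete

  # start fresh (add --vm-driver=virtualbox to be explicit about the hypervisor)
  minikube start

  # sanity check that kubectl now talks to the new cluster
  kubectl get nodes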

But as I was working on that, I stumbled on some news about Azure Stack (https://social.msdn.microsoft.com/Forums/azure/en-US/131985bd-bc56-4c35-bde8-640ac7a44299/microsoft-azure-stack-development-kit-201709283-now-available-released-10102017?forum=AzureStack), and specifically the Azure Stack Development Kit (ASDK), which allows for a one-node setup of Azure Stack.

This tickled my curiosity gene. A quick Google search to see whether there were any tutorials or advice on running a nested Azure Stack on Azure, and here I am, setting up just that.
Keep in mind that the required VM (E16s v3) costs just above 1€/hour, which means roughly 800€ monthly, so do not forget the auto-shutdown if you need to control your costs 🙂
The guide I followed is here : https://azurestack.blog/2017/07/deploy-azure-stack-development-kit-on-an-azure-vm/
I did almost everything using the Azure portal; it might be useful to build a script to do that more quickly (see the sketch below).
Note that the email with the download link takes some time to arrive, so you might start with that. Or you can use the direct link : https://aka.ms/azurestackdevkitdownloader
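If you want to script the VM part rather than clicking through the portal, it would look roughly like this with the Azure CLI. This is only a sketch : the names and region are mine, the password is a placeholder, and you still have to size the disks and networking as per the guide.

  # a resource group and the big VM (E16s v3 supports nested virtualisation)
  az group create --name asdk-rg --location westeurope
  az vm create --resource-group asdk-rg --name asdk-host \
    --image Win2016Datacenter --size Standard_E16s_v3 \
    --admin-username asdkadmin --admin-password '<your-password-here>'

  # do not forget the auto-shutdown to keep the bill under control
  az vm auto-shutdown --resource-group asdk-rg --name asdk-host --time 1900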

And this first test did not work out the way I expected. There were many differences between the article, the official doc, and what I encountered while deploying. So back to the first step : I redeployed the VM, redownloaded the SDK, and started from scratch, this time following the official doc (https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-deploy). I just added the tweak to skip the physical host check, so that the installation would continue even though it was running on a VM.

After a few hours, voilà, I had a fully running Azure Stack within an Azure VM!
Now I just have to read the manual and play with it. This will be the subject of a future post, so keep checking!

My experience @ Experiences’17

It has been a long two-day event for Microsoft France.
I wanted to summarize this event and what happened during those two days.
I will not be extensive about all the announcements and sessions that were offered.
This will just be my experience (pun intended) of the event.

This year I did not present a session, mainly because the process to submit one was very unclear, and I did not want to fight against smoke. One last precision : it was only my second Experiences, and I never attended its predecessor, Techdays.

As I said, it is a two-day event, split between a business day and a technical day. I attended both, as my role is also split between the two aspects. I found that the distinction was very visible in the content of the various sessions, apart from the keynotes (and the Partner Back to School session). Overall the technical level is rather low, but most of the MS staff is onsite and you can have very interesting discussions with them, as well as with the other attendees.

A word on the attendees : there are very different groups in there. I have met with numerous Psellers and MVPs, as well as Microsoftees. Obviously, there are many customers and partners around, some of them just for show, some with a very specific project/problem in mind. And there are people that I am not accustomed to seeing at business events, but who bring a refreshing variety to the general attendee population. These are both students from multiple schools (engineering, but not only), and employees who managed to get their managers' approval because the event is free.
I am not sure whether it is the case in other countries, but in France we usually have difficulties getting approval to travel abroad and pay for a conference. It is not always true with every company, but it has been widespread enough that some European-wide events are replicated to a smaller scale in France to allow French techies to get the content as well.

Back to the event itself : the rhythm was rather intense this year, and I missed many sessions in order to meet and talk with everyone I wanted to. As with every event, the quality of a session depends heavily on the quality of the speaker. The ones I attended were very good and made a lot of effort to stay focused on their topic and keep everyone on board.
As for the keynotes, well, they were of the expected quality, on par with Inspire, with several videos, demos, interviews etc. As was the case at Ignite, some talks were highly specific (to AI or quantum computing) and made me believe that Satya Nadella is borrowing some moves from Elon Musk. It was very different from the Tech’Ed days, when we were shown the new interface for System Center, or a new tablet.

The buzzword this year at Experiences was AI (it was Blockchain last year). I have to admit that the AI Hackademy included some very interesting ideas and startups. I did not manage to visit them all but I was pretty impressed to see so many startups working on the subject, and bringing fresh ideas and concepts to our world.

All right, everything was very positive, I am convinced. I will share one mildly negative thought though : AI was sometimes stretched thinly over a piece of software or an idea. I’ve seen some interesting uses of statistics, or even good programming and algorithms, but saying these were truly AI was going a bit far. At least that’s my opinion, but we may not all have the same definition… just as with what a cloud is 🙂

https://experiences.microsoft.fr/business/experiences-17-morceaux-choisis/

Kubernetes the hard way, revival

This week, I had my first troubles with GitHub while trying to push all the updates I made to “Kubernetes, the hard way” to make it work on Azure. Long story short, I had to ditch everything I had written for the new guide and start over, as there were too many new commits on Kelsey’s guide.
This misadventure pushed me to do several things :
1) Create and maintain my own fork of Kelsey’s guide (see the Git sketch below) : https://github.com/frederimandin/kubernetes-the-hard-way
2) Rewrite this guide, and make it work on Azure
3) Use Visual Studio Code, with the GitHub and Markdown plugins. As it was also a first… some pain was involved.
4) Go several steps beyond, in order to play a bit more with K8S
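For the first point, keeping my fork in sync with Kelsey’s repository boils down to a handful of Git commands. A minimal sketch, with the remote names being my own convention :

  # one-time setup : my fork is "origin", Kelsey's repository is "upstream"
  git clone https://github.com/frederimandin/kubernetes-the-hard-way
  cd kubernetes-the-hard-way
  git remote add upstream https://github.com/kelseyhightower/kubernetes-the-hard-way

  # each time upstream moves : fetch, merge (or rebase), then push back to the fork
  git fetch upstream
  git merge upstream/master
  git push origin master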

The first two steps are done and committed, as you may see on GitHub. It took less work than expected, as most of the commands I wrote for the previous guide were still usable. I did have to redeploy the test K8S cluster to confirm that everything was fine. Please, if you have some spare time, do not hesitate to use this guide and give me some feedback!

Then I ran several tests, in addition to the ones included in the guide.
First I deployed a container from the public Docker registry : https://hub.docker.com/r/apurvajo/mariohtml5/
This went quite well, and I had the infinite Mario running for several hours, accessible from the outside world on its own port.
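For reference, the deployment itself only takes a couple of commands, something along these lines. I am assuming here that the container listens on port 80 (check the image’s Docker Hub page), and I am exposing it with a NodePort, since this hand-built cluster has no cloud load balancer integration :

  # create a deployment from the public image
  kubectl run mario --image=apurvajo/mariohtml5 --port=80

  # expose it to the outside world on a node port
  kubectl expose deployment mario --type=NodePort --port=80

  # find out which port was allocated
  kubectl get service mario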

At that point I got lost… I started to update this blog, and realized that the website was not using HTTPS. I figured now would be a good time to fix that, and I thought about using Let’s Encrypt (https://letsencrypt.org). As it was my first time, it took me a while to find out what to do exactly. Actually, the easiest way was to simply activate the extension for the web app on Azure and follow the guide. We are now securely discussing on https://cloudinthealps.mandin.net 🙂

That was fun, but I still have not started to play with Helm (https://www.helm.sh), which was the original idea.
I’ll have to postpone that activity and blog about it later!

Kubernetes and Azure Container Instances

Following my recent ventures in the Kubernetes world (http://cloudinthealps.mandin.net/2017/08/23/kubernetes-the-hard-way-azcli-style/), I now had a functional Kubernetes cluster, on Azure, built with my own sweat and brain.

But that was not the original goal. The original goal was to try and play with Azure Container Instances, and its Kubernetes connector https://azure.microsoft.com/fr-fr/resources/videos/using-kubernetes-with-azure-container-instances/

Following the guide on GitHub was relatively straightforward and painless (https://github.com/Azure/aci-connector-k8s), but I encountered two small issues.
The first was that I am not completely comfortable yet with all things K8s, and I had to read a bit about taints to understand why the ACI connector is not used by the K8s scheduler by default. Not a big deal, and a good way to get to know more about K8s.
The second one was maybe due to the fact that I had never used ACI before, maybe not. I logged it as an issue in the GitHub project (https://github.com/Azure/aci-connector-k8s/issues/33) to make sure that it is taken into consideration.
The short story is that I was missing a registered provider in my subscription. However, the error message did not pop up in the kubectl output, but only in the Activity log in the Azure portal. Another good occasion to learn and dig into some tools.
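Two commands were enough to dig into both issues. A sketch, assuming the connector registers itself as a node named aci-connector as in its README (check kubectl get nodes if yours differs) :

  # the connector shows up as a tainted node, which the scheduler ignores by default
  kubectl describe node aci-connector | grep -i taint

  # the missing piece on my side : the ACI resource provider was not registered in the subscription
  az provider register --namespace Microsoft.ContainerInstance
  az provider show --namespace Microsoft.ContainerInstance --query registrationState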

Kubernetes, the hard way, AZCLI style

Finally a tech post!

I have been busy this week, on command lines and Kubernetes.

The starting point was the recent announcement of Azure Container Instances and the related Kubernetes connector : https://github.com/azure/aci-connector-k8s

I admit I did try what Corey Sanders showed in his show : https://channel9.msdn.com/Shows/Tuesdays-With-Corey/Tuesdays-with-Corey-Azure-Container-Instances-with-WINDOWS-containers. However, what I found interesting and wanted to try was the ACI connector for Kubernetes, and how we would work with that.

Of course we have a test Kubernetes cluster here, which someone from our team built, but it felt too easy to just add the connector. Also, I am not comfortable yet with Kubernetes, and I wanted to get my hands dirty and learn more about the inner workings of a K8s cluster.

I remembered a quote from the Geek Whisperers’ show featuring Kelsey Hightower. He said that he wrote a guide to build a K8s cluster from the ground up, without any shortcuts. The guide can be found here : https://github.com/kelseyhightower/kubernetes-the-hard-way

The downside is that the guide is aimed at Google Cloud Platform, and I am an Azure guy.

And there I had my pet project for the week : adapt the guide for Azure, using only Azure CLI commands!
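To give you a flavour of the translation work, every gcloud command in the original guide becomes an az equivalent, along these lines. The names below are illustrative, not the exact ones from my guide, and parameter names have shifted a bit across CLI versions :

  # a resource group to hold everything
  az group create --name kubernetes-the-hard-way --location westeurope

  # the virtual network and subnet for controllers and workers
  az network vnet create --resource-group kubernetes-the-hard-way \
    --name kubernetes-vnet --address-prefix 10.240.0.0/16 \
    --subnet-name kubernetes-subnet --subnet-prefix 10.240.0.0/24

  # one of the controller VMs
  az vm create --resource-group kubernetes-the-hard-way --name controller-0 \
    --image UbuntuLTS --size Standard_DS1_v2 \
    --vnet-name kubernetes-vnet --subnet kubernetes-subnet \
    --admin-username kuberoot --generate-ssh-keys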

There was one final trick for me to learn : storing and sharing all that on GitHub. As I had never had to work with Git by myself, it was also a good way to learn the moves.

So, lots of new stuff learnt :

  • Create a K8s cluster from scratch
  • GitHub, and Git
  • Making progress on Azure CLI
  • A good refresher on Azure infrastructure

The project is hosted here : https://github.com/frederimandin/Kubernetes-the-azcli-way

There are several next steps to work on :

  • Integrating properly with Kelsey’s guide
  • Testing my own guide again
  • Adding the ACI connector to my cluster and playing with it (and writing about it, of course!)

I’ll keep you posted, of course!

Inspire ’17

We are almost halfway through the first quarter of the Microsoft financial year, a month after the partner convention, which has been rebranded “Inspire”.

Now that I am not a newbie any more, I can step back a bit and see past the awe of the first event.

The setting this year was Washington DC, which is a great place for this kind of event. There are many hotels nearby, the city center is small enough to walk around, and there are many chic places for the evenings.

This is not a travel blog, so I will not go further into the tourism information.

This year we had decided, with our PSE, to have a lighter Microsoft agenda, and to be able to attend more sessions and impromptu meetings. I have to say that it was a wise choice. It allowed us to make new connections, to network quietly and to enjoy the Expo and the other partners. Note that I found it way easier to network this time, as our company was better known in the ecosystem, and we also had a better knowledge of the various people, names and acronyms used throughout Microsoft.

This year I was able to attend several sessions, in different formats : roundtables, breakouts, demo theaters, workshops and of course keynotes. The content was really good, though it is definitely not a technical event.

The best way to have a technical discussion is to go to the Microsoft pods with a specific subject in mind and ask for an expert on that matter. Also these pods provide good help and advice on how to build or develop your business along the current track or toward a brand new scope (yes GDPR was a recurrent topic, I’ll write separately about that later on).

I have met many amazing partners and vendors, through the social events or at their booths, and we have started to build new relationships that will hopefully help develop all our business and knowledge.

Once again, it is an event where you have to be prepared, and be prepared to change your plans.

First you need to have an idea of your goal beforehand. Do you want to find new partners within the ecosystem? Would you rather gain some traction or visibility in that ecosystem, both from Microsoft and from the other partners? Are you open to new business opportunities? Are you here to listen to the keynote and get a feeling of what is coming in the near future?

Then, you need to build your agenda around that goal : sessions, meetings, events etc. But do remember to leave some room to continue a discussion with an unexpected partner, or be ready to skip a live session and watch the recording instead, because something else popped up.

And mostly, have fun 🙂

My journey to the cloud

I may have skimmed that subject a few times before, but as I get to the end of the (Microsoft) year, and begin a new one, it feels right to reflect for a while on what got me where I am now.

The short version is : I had had enough of cabling, servers, storage and operating systems, and wanted to move to something else, albeit related. Okay, that is VERY short. Allow me to develop that further.

I started working in IT about 15 years ago. I did my duties in user support, moved to network engineering and implementation. At the same time, I discovered the wonderful world of Microsoft training and certification, and got my first cert around 2003, quickly followed by an MCSE (yes, on Windows 2000!).

I switched back and forth between networking and systems engineering for several customers. I collected some knowledge along the way, mainly about hardware installation, cabling, storage and servers, but also about virtualization, networking and SAN. I continued my certification journey in parallel, maintaining my MCSE up to Windows 2016 and Azure. I also collected a few other certs : ITIL, Red Hat RHCE (6 & 7), VMware VCP & VCAP-DCD, Prince2 etc. I will say more about certification in a later article, keep in touch!

To complete the picture, I tried my hand at project management, as well as people management.

Let’s get to the point where it gets interesting. The first time I heard about the public cloud was at Tech-Ed Europe, probably in 2010. It was mostly limited to SQL Server databases, with many limitations. It was not really a hit for me. The subject kept reappearing : public cloud, private cloud, elastic computing, you’ve heard the talk.

There were actually two triggers to my “Frederi, meet Cloud” moment.

The first one was rather a long-term evolution of my area of interest. After years spent working with the same company, and on the same software, I got to the point where I could understand the business side of my actions and responsibilities for our customers. It was a slow shift to a more end-user/application-centric approach. This is where I try to push today : the major focus and metric is the end user. If this user is not happy with their experience, then we (the whole team behind the software, from IT infrastructure to developers and designers) have failed. This is why I tend to ask the question early in the discussions : how is the application used? By whom?

The second trigger was more of an “a-ha” moment, specifically about the public cloud. In a previous job, I was in an outsourcing team, focused on infrastructure. We had a whole Services department, whose job was to design, build and deliver custom software. We almost never had a project in common. Until one day we had a developer on the phone, and we had the most common conversation between dev and ops :

Dev : “we have built a php application for that customer, and he wants to know if we can host and operate it, and what the cost would be”

Ops (me) : “OK, tell me your exact need : OS, VM size, which web server, which version, how much disk space, a public IP etc?”

Dev : “I do not know that”

Ops : “in that case, I cannot give you an estimate. We can operate, but we need to know what”

Then followed a few days of emails, trying to get those details ironed out and to write a proposal. Two weeks later, we had the same dev on the phone : “Drop it, the customer has already deployed in Azure by himself”.

That is when I realized that we, ops and infra, could not stay on the defensive and only ask about what we knew best. We had to ask about the application itself, and we had to get into that “Azure” stuff.

And that’s how I ended up in Azure, and mostly PaaS oriented 😉

Choosing between IaaS, PaaS, SaaS (and something else?)

 

I know, there are tons of materials and trainings that will explain to you how to select between SaaS and custom software.

I’ll summarize their usual points, but I wanted to add some details on how you might have to look at the full scope of cloud services : from IaaS, through PaaS, to SaaS, with a detour through containers.

 

First, the usual discussion, which I have seen unfold dozens of times : why choose SaaS over a custom/on-premises solution? You know the drill, right?

On one side, you have full control and can customize the solution. This means the software will be tailored to your exact needs, and you will control exactly what is done with it, how it is updated, where data is stored, accessed, replicated, backed up etc. You will know the exact setup of the deployment, which layer is connected to which other layer, how, where traffic goes, how each layer is protected and replicated. You will handle failover, high availability etc. In a few words : you will be the master of your own kingdom. The problem with that path : you are, mostly, on your own. All of the domains I just listed are your responsibility, and you have to have the knowledge and skills to handle them. You might need to expand those skills to cover 24/7 operations. You’ll need a strong IT team, in addition to a trained software team.

On the other side, you have SaaS : brand new, quick and easy. You set it up in a flash, connect the solution to your other enterprise software, create user accounts and voilà! No administrative overhead, and the only skill you have to master is the configuration of the solution. You’ve seen the downside coming : you have absolutely no control over the software, its release cycle, or the mechanisms in place to provide high availability. Sometimes you have some control over your data, but it’s not a given.

In the end it’s your call to choose the balance you need.

The cloud has integrated the same choices and solutions. You will have to decide whether you want to use IaaS, PaaS or SaaS. The basic triggers are the same : you choose the right balance between control, freedom and responsibility.

You can read a good explanation here : https://docs.microsoft.com/fr-fr/azure/app-service-web/choose-web-site-cloud-service-vm

I would like to add something to that horizon, something spicier, which could probably give you the best of each solution, provided you are ready to learn some new skills. We have had the same discussion several times with our customers, revolving around the limitations of Azure App Service for some Java applications, its lack of control, and how moving from that to full-blown IaaS virtual machines felt like dropping out of the cloud.

Here is what we built with some of those customers. We wanted to provide them with the flexibility and ease of use of Azure App Service, tailored to their needs, without adding much IT admin overhead. We had already been running a Kubernetes cluster for our own internal needs for a while, and it was an easy leap to suggest that solution.

Kubernetes is becoming the leader in container orchestration, but you could choose any other solution (DCOS, Swarm etc.)

Here is a short list of the benefits the customer gained with that solution :

  • Flexibility of the deployment and settings of the application, down to every Java VM option (see the example below)
  • Scalability of an enterprise-ready container orchestrator, running on a reliable cloud platform
  • Ease of deployment : these are containers after all!
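
To illustrate the first point : every JVM flag ends up as a plain setting on the container, fully under the team’s control. A minimal, hypothetical example (the deployment name, image and options are made up, and I assume the image’s entrypoint honours JAVA_OPTS) :

  # run the customer's Java image with explicit JVM options, no App Service limitation in the way
  kubectl run billing-api --image=myregistry.azurecr.io/billing-api:1.0 \
    --env="JAVA_OPTS=-Xms512m -Xmx2g -XX:+UseG1GC" --port=8080

  # scale it like any other deployment
  kubectl scale deployment billing-api --replicas=3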

 

The only thing you have to keep in mind here is that someone has to learn and master containers and the orchestration layer for those. Kubernetes might not be the most accessible solution here, but it is, in my mind, the most mature and powerful.

 

One last word for the sceptics who still believe that Microsoft and open source are far apart : try making a new build of your software for containers, using Visual Studio :

 

https://blogs.msdn.microsoft.com/jcorioland/2016/08/19/build-push-and-run-docker-images-with-visual-studio-team-services/

Voice control and security

I will assume that I am definitely not the first one to write about that, but I feel the need to write anyway.

We saw during a few recent events that our beloved new always-listening devices can interpret an order from almost anyone (someone ordered a Whopper? Burger King : OK Google!).

It seems trivial and a bit childish, but when you start integrating many services into a system like that, you may have to think about security.

This works at several levels : from limiting commands to voice-print recognition.

https://xkcd.com/1807/

The first issue that comes to mind, related to several recent events, is that you may want to include some kind of limitation on your Google, Alexa or whatever voice-activated device you are using. Just to prevent anyone in listening range from ordering everything from your favorite shopping website. This is pretty simple : you may set up your system to just load your shopping cart, and wait for a physical confirmation on your phone/laptop.

Second, you may want to rethink the voice activation of many devices. It has been proven that this can be hacked quite easily (http://www.zdnet.com/article/how-to-hack-mobile-devices-using-youtube-videos/). The good thing is that you can limit what can be done on your phone without providing an unlock code. It depends on your phone, but you should at least check that out.

Then come the issues that are not really solved, at least on publicly sold devices. There are two that I can think of right now : voice printing, and content securing.

Voice printing would allow your devices to recognize your voice only, so that no one else can use your device. Pretty simple, and some applications are already providing that, in a limited way. I know, it goes a bit against the current flow of speech recognition, which has been improved up to the point where it does not need to be trained any more. If anyone remembers having to go through hundreds of sentences with Dragon Dictate…

Content securing is the other end of the spectrum : how do you make sure that some content cannot be spoken out loud by your device when it is private? “Siri, how much is there in my bank account?” You might not want Siri to tell the amount out loud on the bus, right? I agree, you should not ask the question if you do not want to hear the answer, but still, it might provide additional security to be able to limit some outputs.

I have been told about a device designed to provide privacy for your conversations and voice commands : HushMe. I have to admit I am a bit puzzled by the device : http://gethushme.com

I am not sure whether it is a real solution, and a viable one 🙂

Sharding your data, and protecting it

I am quite certain that there are many articles, posts and even books already written on that subject.
To be honest, I did not search for any of those. For some reason, I had to figure out sharding almost by myself while building a customer design.
So this post will just be my way of walking through the process, and confirm that I can explain it again. If someone finds this useful, I will be happy 🙂

Here is the information I started with. We want to build an application that uses a database. In our case, we chose DocumentDB, but the technology itself is irrelevant. The pain point was that we wanted to be able to expand the application worldwide, but also to keep a single data set for all the users, wherever they lived or connected from.
That meant finding a way of having a local copy of the data, writable, in every location we needed.

Having a readable replica of a database is quite standard. You may even be able to get multiple replicas of this kind.
Having a writable replica is not very standard, and certainly not a simple operation to set up.
Having multiple writable replicas… let’s say that even with reading the official guide from Microsoft (https://docs.microsoft.com/fr-fr/azure/cosmos-db/multi-region-writers) it took us a while to fully understand.

As I said, we chose to use DocumentDB, which already provides the creation of a readable replica with a few clicks.
This is not enough, as we need a locally writable database. But we also need to be able to read data that is written from the other locations. What we can start with is creating a mesh of replicas across regions.
We could have a writable database in our three locations, with a readable copy in each of the other two regions :

And that is where you have to realize that your database design is done.
Have a closer look at that design. A very close look. And think about our prerequisites : we need a locally writable database, check. We need to read data written from the other locations, check. We do not need to solve the last step with database mechanisms.

The final step is handled in the application itself. The app needs to write into its local database, to maximise performance and limit data transfer costs between geographically distant regions. This data will then be replicated, with a small delay, to the other regions. And when the application needs to read data, it will query all three data sets available in its region and consolidate them into a single view.

And there it is. Tell me now whether I was a bit thick not to understand that from the Microsoft guide at first read, or please, someone tell me that I am not alone in having struggled a bit with this design!

Note : this issue, at least for DocumentDB on Azure, has since been solved by the introduction of CosmosDB, which provides multiple writable replicas of a database a few clicks away.
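For the curious, with today’s Azure CLI that looks roughly like this. A sketch only : the account name, resource group and regions are made up, and the parameter syntax has changed across CLI versions, so check the current documentation.

  # a Cosmos DB account spanning three regions, with all of them writable
  az cosmosdb create --resource-group data-rg --name my-cosmos-account \
    --locations regionName=westeurope failoverPriority=0 isZoneRedundant=false \
    --locations regionName=eastus failoverPriority=1 isZoneRedundant=false \
    --locations regionName=southeastasia failoverPriority=2 isZoneRedundant=false \
    --enable-multiple-write-locations true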