Today I worked with UIKit Dynamics to implement some physics-based UI effects. It was my first time using it, and I was really impressed by how easy it is to bring awesome physics-based effects into your app. Usually physics engines are only available in games, so it's really cool that Apple brought this kind of technology to iOS just for building UI.
While I was using it, one thing confused me: UIPushBehavior's instantaneous mode. For UIPushBehavior's other mode, continuous, the documentation says very clearly:
A continuous force vector with a magnitude of 1.0, applied to a 100 point x 100 point view whose density value is 1.0, results in view acceleration of 100 points / second² in the direction of the vector; this value is also known as the UIKit Newton.
A magnitude of 1 adds around 100 points/second of velocity to a 100 × 100 object with density 1.0.
To verify this, I wrote a simple program to test it out, and the numbers match. The way it works is that the velocity is added after the first frame processed by the UIDynamicAnimator.
With these numbers in mind, it's not hard to understand how it works. I think we can see the instantaneous-mode UIPushBehavior.magnitude as momentum. Given the formula
p = mv
The momentum an instantaneous-mode UIPushBehavior provides is p = magnitude × 100, so the velocity change for a view of mass m is 100 × magnitude / m.
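To make the arithmetic concrete, here is a quick sketch (Python used only as a calculator; it assumes, per the quote above, that a 100 × 100 point view with density 1.0 is one mass unit, and that an instantaneous push of magnitude 1.0 adds 100 points/second to that mass):

```python
def uikit_mass(width, height, density=1.0):
    """Mass in 'UIKit mass units': density * area / (100 * 100)."""
    return density * width * height / (100.0 * 100.0)

def delta_velocity(magnitude, width, height, density=1.0):
    """Velocity change (points/second) from an instantaneous push:
    v = p / m, where the momentum p = magnitude * 100."""
    return magnitude * 100.0 / uikit_mass(width, height, density)

print(delta_velocity(1.0, 100, 100))  # 100.0, matching the number above
print(delta_velocity(1.0, 200, 200))  # 25.0: four times the mass, a quarter the speed
```

So a bigger (heavier) view gets proportionally less velocity from the same push, exactly as p = mv predicts.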
Recently, I have been working on an iOS project that uses UICollectionView. For updating items in the collection, I wrote code like this
Basically, whenever the data source updates the items, it runs this piece of code to insert or delete items in the UICollectionView. The code looks pretty straightforward, but sometimes it crashes with an exception like this
Invalid update: invalid number of items in section 0. The number of items contained in an existing section after the update (1) must be equal to the number of items contained in that section before the update (1), plus or minus the number of items inserted or deleted from that section (1 inserted, 0 deleted) and plus or minus the number of items moved into or out of that section (0 moved in, 0 moved out).
Really odd, right? I checked the number in the data source, and it was correct. I thought it could be a bug, so I googled around and found this on Stack Overflow. It says there is a bug in UICollectionView; to work around it, you need to call reloadData when the collection is empty, roughly like this:
Problem still not solved
Although I tried calling reloadData() first and then inserting and deleting, I still saw crashes on the performBatchUpdates call. It's not well documented how performBatchUpdates works in detail, so I decided to experiment to understand it.
Then I saw something like
So it turns out performBatchUpdates calls collectionView(_:numberOfItemsInSection:) first, before calling the given closure, to learn the item count. Next, it calls the closure, and eventually it calls collectionView(_:numberOfItemsInSection:) again to check the number. And this is where the assertion exception is thrown. Say we insert a new item; the collection view sees
Okay, before one insert, the total item count is 1, let's update.
Job done, let's check the item count again, wait, WTF? it's still 1? Impossible, I just inserted one item!
At this point, I finally understood why it throws that exception. My data source updates its item count first, and then performBatchUpdates is called to update the UICollectionView. The problem is that collectionView(_:numberOfItemsInSection:) returns the post-update item count, which confuses performBatchUpdates, because the item count doesn't appear to change according to the updates we just made.
If my understanding of performBatchUpdates is correct, the item count returned by collectionView(_:numberOfItemsInSection:) should stay in sync with the updates made inside the closure. With this idea in mind, it's easy to solve: just add a property for the item count and update it inside the performBatchUpdates closure,
and in collectionView(_:numberOfItemsInSection:), instead of returning items.count, return the property that is manually maintained by the performBatchUpdates closure.
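The consistency check performBatchUpdates appears to run can be sketched as follows (Python, purely for illustration; the real check lives inside UIKit):

```python
def check_batch_update(count_before, count_after, inserted, deleted):
    """Raise if the data source's item counts don't match the updates,
    mirroring UIKit's 'Invalid update' assertion."""
    if count_after != count_before + inserted - deleted:
        raise AssertionError(
            f"Invalid update: {count_after} items after the update, expected "
            f"{count_before} + {inserted} inserted - {deleted} deleted"
        )

# The crash scenario: the data source already returns the post-update
# count (1) both before and after inserting one item.
try:
    check_batch_update(count_before=1, count_after=1, inserted=1, deleted=0)
except AssertionError as error:
    print(error)

# The fix: a manually maintained count still reports 0 before the update.
check_batch_update(count_before=0, count_after=1, inserted=1, deleted=0)  # passes
```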
I started GUI programming with Visual Basic 6.0, then I learned how to use Microsoft MFC with C++, and a while later I switched to Python, working with wxPython, a Python binding for wxWidgets. It has been more than ten years since I started working on GUI software. Recently I started working on iOS / OS X app projects in Swift, and interestingly I found that the essentials of building GUI apps haven't changed much, and neither have the problems I've been seeing. Although we still face the same problems in GUI development, the good thing about software technology is that the solutions keep improving over time; there are always new things to learn.
At the very beginning, people built GUI apps without any proper design. Over time, as new features were added and bugs were fixed, the system eventually became an unmaintainable mess. Then design patterns were introduced, and people started writing GUI apps with them. In the MFC era, there was a pattern called Document/View architecture. It divides the application into two parts: the Document holds the business logic and data, and the View presents the Document. This pattern is actually a variant of the MVC design pattern, where the Document is the Model, and the View and Controller are combined into the View. Since then, MVC has been widely known and used. The idea is to make the Model the component in charge of data and business logic, the Controller handle user events, and the View present the Model.
With the MVC architecture, ideally the Model knows nothing about GUI implementations, which makes it
Say you develop an accounting system with the MVC architecture and MFC as the GUI library. One day you decide to port your system to Mac OS X. Since the Model knows nothing about MFC implementation details, all you need to do is rewrite the views and controllers in Objective-C for OS X (or in Swift, as that's the better option nowadays).
The problem with MVC
Although MVC solves some problems, there are still issues
Model changes bring huge impacts on View and Controller
Since the View and Controller have plenty of connections to the Model, whenever you modify the Model, you also need to modify both the View and Controller wherever they use the part of the Model you changed.
View and Controller are not portable
Since the View and Controller need to be deeply integrated with the GUI library, they are hard to port.
Responsibility of View and Controller is vague
One big problem of MVC is that the responsibility split between View and Controller is very vague. The View sounds like a component for presenting the Model only, but in many GUI frameworks UI elements also receive user interactions, so they could be handled in the View, even though the Controller is usually supposed to take care of user input. As for the Controller, it updates UI elements when the Model data updates, so isn't it actually presenting the Model? And if the Controller is updating UI elements, what does the View do besides receiving user input? You can make the View transform data from the Model and set it on the real UI elements, but the Controller can also do that.
There are many MVC variants out there, and interestingly, they are similar but differ in the details. Some say the Controller should only update the Model and the View should observe the Model; others say the Controller should update both the View and the Model, with the View knowing nothing about the Model. I think this is good evidence that the responsibility split between View and Controller is vague, and given a real GUI framework, it's very easy to mess View and Controller up altogether.
Hard to test View and Controller
A very common problem in GUI programming is how to test the app automatically. The traditional way is to simulate user input events such as mouse clicks and keystrokes. The problem with this approach is that it's really, really fragile. In many cases, perhaps due to UI animations or other quirks, your tests break even though they are written correctly. It's also very hard to mock UI-related objects, since they are all implementation details. Moreover, when you write tests from the UI-interaction perspective, they are bound to a specific UI environment.
A better solution - MVVM (Model View View-Model)
To address these issues, MVVM was introduced. The View-Model is a stateful data layer that presents the underlying data Model and provides operations on it, while the View only translates and reflects state and data from the View-Model.
As couplings were reduced down to
View to View-Model
View-Model to Model
The View-Model should also know nothing about UI implementation details, which makes it
Absorbing change impacts from Model
The unsolved problem - how to deal with data binding
Unlike traditional web applications, GUI applications are very dynamic. Say you have a view presenting the current temperature in real time; the number could be updated at any time. A very primitive solution is to have a callback function property on the data model.
Then you can write code like this to stay posted on the real-time temperature data
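A minimal sketch of this callback-property idea (written in Python here for brevity; the class and property names are made up):

```python
class TemperatureModel:
    """Data model with a single callback property for updates."""

    def __init__(self):
        self.temperature = 0.0
        self.on_temperature_update = None  # the primitive callback property

    def set_temperature(self, value):
        self.temperature = value
        # Notify whoever registered the callback, if anyone did.
        if self.on_temperature_update is not None:
            self.on_temperature_update(value)

model = TemperatureModel()
readings = []
model.on_temperature_update = readings.append  # "stay posted" for updates
model.set_temperature(21.5)
model.set_temperature(22.0)
print(readings)  # [21.5, 22.0]
```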
What if you need to update another UI element in another controller?
Then you need to make the callback an array.
What if you want to cancel the subscription?
Then you need to manually remove the added callback from the array.
What about error handling? Say we have a network connectivity issue; how can we update the UI to let the user know?
Then you will need to add a second callback for error.
What about a new property to be updated?
Well, then you need another callback array and an error callback for it.
There is another serious problem with callback functions: when the ViewController is destroyed, if you don't properly unsubscribe the callback from the data model, the unowned self held by the callback closure might still get used later and end up crashing your app. To avoid that, you need to cancel the subscription manually. Very soon there will be tons of callback functions to take care of; trust me, it will be a nightmare.
A better solution - Observer pattern
Since we only want to be notified when a certain event happens, and we don't want the data model to know anything about its GUI clients, a better approach is the Observer design pattern. For the real-time temperature example (say we also add wind speed), the code could be modified like this
Then, we can subscribe to the event as many times as we want,
and to cancel the subscription to the Subject at any time, all you need to do is call the cancel method of the returned Subscription object.
And if you want to deal with errors, you can add a Subject for errors, like
But I bet you're starting to feel awkward already: the error we are dealing with only relates to the temperatureSubject, so why not combine them altogether?
Think about this
Then you can also subscribe to errors from the same subject
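Here is a rough, illustrative sketch of such a combined subject (plain Python, not a real FRP library; all names are made up), with data and error events delivered through the same object and cancellation via the returned subscription:

```python
class Subscription:
    """Handle returned by subscribe(); cancel() stops further delivery."""

    def __init__(self, subject, observer):
        self._subject = subject
        self._observer = observer

    def cancel(self):
        if self._observer in self._subject._observers:
            self._subject._observers.remove(self._observer)


class Subject:
    """Delivers both values and errors to all current observers."""

    def __init__(self):
        self._observers = []

    def subscribe(self, on_value, on_error=None):
        observer = (on_value, on_error)
        self._observers.append(observer)
        return Subscription(self, observer)

    def send_value(self, value):
        for on_value, _ in list(self._observers):
            on_value(value)

    def send_error(self, error):
        for _, on_error in list(self._observers):
            if on_error is not None:
                on_error(error)


temperature_subject = Subject()
values, errors = [], []
subscription = temperature_subject.subscribe(values.append, errors.append)
temperature_subject.send_value(23.4)
temperature_subject.send_error("network unreachable")
subscription.cancel()
temperature_subject.send_value(25.0)  # not delivered after cancel
print(values, errors)
```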
Deferred (Promise or Future) for async operations
In fact, an observer pattern with both data and error callbacks is not a new idea. When I was working with Twisted (an async networking library for Python), there was a class called Deferred: you add a callback and an errback to the object, and when the async operation finishes, they get called in a certain manner. For example
As all async operations return a Deferred object, there is a standard way to deal with them, which makes it easy to provide common utility functions to manipulate them. For example, you can provide a retry function that retries an async function N times without modifying its code. Like this
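The shape of such a generic retry wrapper can be sketched like this (plain synchronous Python rather than real Twisted Deferreds, just to keep the example self-contained):

```python
def retry(times, func, *args, **kwargs):
    """Call func until it succeeds, making at most `times` attempts,
    re-raising the last error if every attempt fails."""
    last_error = None
    for _ in range(times):
        try:
            return func(*args, **kwargs)
        except Exception as error:
            last_error = error
    raise last_error

attempts = []

def flaky_fetch():
    """Fails twice, then succeeds; stands in for an unreliable request."""
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("temporary failure")
    return "result"

print(retry(5, flaky_fetch))  # prints "result" after two failed attempts
```

The point is that the retried function needs no modification at all; the retry policy lives entirely in the wrapper.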
Although it eases the async-operation headache, Deferred was designed for one-time async operations rather than GUIs, and the needs are pretty different. For a GUI, we want to keep monitoring changes to a subject instead of firing a request once and getting a single result.
Functional Reactive Programming with ReactiveCocoa
Given the problems and solutions so far, you may ask: why not just combine these two paradigms? Luckily, we don't need to build this ourselves; solutions are already available under the name FRP (Functional Reactive Programming). FRP basically combines the Observer pattern with Deferred, plus functional programming. It solves not just async operations but also GUI data updating and binding with views. There are different FRP libraries for Swift; the most popular are ReactiveCocoa and RxSwift.
I like ReactiveCocoa more than RxSwift, as it has distinct types: Signal for emitting events and SignalProducer for making Signals (I will introduce them later). And as the Zen of Python says,
Explicit is better than implicit.
MVVM with ReactiveCocoa
To adopt MVVM, both the View-to-View-Model and View-Model-to-Model relationships should only know the other party in the forward direction. Hence, to notify the party in the backward direction, a good data-binding mechanism is inevitable, and that's where ReactiveCocoa kicks in.
Although you can build your own data-binding mechanism, like the observer pattern above or the SwiftBond described in this article, I don't think it's a good idea, as you will probably end up with something pretty similar, which is in fact reinventing the wheel.
Also, the Reactive approach solves more than the data-binding issue: since a modern app usually talks to a server via an API, we also need to deal with async operations. The Reactive solution comes with
A stable solution that has been widely used and well tested for years
An integrated solution for not just data binding but also async operations
Remember the retry example we mentioned before? It's also a built-in function in ReactiveCocoa, so you can retry any async operation without modifying a single line of its code; just do it like
Besides that, say you want to delay the result a little bit on a queue; not a problem, just call
And as I said, there are also other resources you can use; for example, if you'd like to use Alamofire with ReactiveCocoa 4, you can use the Alamofire integration I built: ReactiveAlamofire.
To be continued - a missing guide for MVVM with ReactiveX
From MVC to MVVM with the ReactiveX approach: this is the best solution I've learned so far. However, it's not really widely used. I think that's because Reactive code looks frightening at first glance, before you spend some effort understanding why to use it and how it works. There is also no practical guide showing how things work, which is why I am writing this. The second part of this article will focus on how to use it.
Bugbuzz is an online debugger, one of my pet projects. What I wanted to provide is a really easy-to-use debugging experience; I envisioned that debugging with it should be like dropping in one line, just as you usually do with ipdb or pdb.
You can do it anywhere, whether on your MacBook or on a server. Then a fancy Ember.js-based online debugging UI appears.
To make this happen, instead of providing the debugging service on your local machine, a debugging API service is needed.
The architecture is pretty simple: the Python debugger library sends the source code, local variables, and all other necessary information to the Bugbuzz API server, which then notifies the Ember.js dashboard via PubNub. When the user clicks buttons like next line / next step, the dashboard calls the API server, and the API server publishes these commands via PubNub to the program being debugged. Upon receiving the commands, the debugging library executes them and sends the source code and local variables to the API server again.
Although Bugbuzz provides an easy-to-use service, it still concerns some developers, since all source code and local variables are passed to the server. You may ask:
Can I trust you with my source code and debugging data?
My answer is
No, you should not trust me.
In fact, this concerns not only you but also me; I don't want any chance of reading your source code or debugging data either. It feels like a paradox: I want to provide you an easy-to-use service, but I want to know nothing about your data. So how do we solve this problem?
The answer is encryption!
I've had this concept for a long while; I call it Anonymous Computing. The idea is to provide a service without knowing the sensitive data being processed. As a service provider, this is really hard to do: the less I know, the less I can provide. But if one can manage it, users don't need to trust the service provider; they can trust the encryption mechanism instead.
One approach is to encrypt the data at the source, pass it to the client via the server, then decrypt the data on the client side. As long as the server doesn't know the secret key, the data remains unknown to the server.
In the past, this was almost impossible to do on the web, as:
The web server renders the web page, i.e. the server will know your data anyway
Browser functionality was pretty limited
Fortunately, it's 2015 now; the browser is not merely a web-page viewer anymore, it's an application platform. And it's not just functionality: performance has also been enhanced over time. Even better, there are booming web-technology communities all around in this era; you can pick any one you like and start crafting an awesome web app without worrying about low-level details, enjoying the beautiful view from the shoulders of giants.
How does it work?
For Bugbuzz, I use Ember.js to develop the dashboard app. It works like this:
Instead of sending plain-text source code and debugging information, when a debugging session starts, the library creates a secret key, encrypts the source code and debugging information with that key, and passes the result to the server. All the Bugbuzz API server can see is encrypted data. To allow the Ember.js dashboard to decrypt the data, the secret key is passed to the dashboard as part of the hash in the URL.
It's natural for Ember.js to use hash-style URLs, and by doing this the web server cannot see the secret key, as the browser only sends the part of the URL before the hash to the server. If you visit a debugging session without the secret key, you will only see a prompt asking you to provide the access key.
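You can see the mechanics with Python's standard URL parser: the fragment (everything after the '#') is a separate component that browsers never include in the HTTP request (the session id and key below are made up):

```python
from urllib.parse import urlsplit

# A made-up session URL carrying the secret key in the fragment.
url = "https://example.com/sessions/abc123#access_key=SECRETKEY"
parts = urlsplit(url)

print(parts.path)      # /sessions/abc123 -- the part the server receives
print(parts.fragment)  # access_key=SECRETKEY -- stays in the browser
```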
Encryption with Ember.js in action
The encryption algorithm we use here is AES. I am not teaching cryptography here; we will focus only on how encryption works with Ember.js. If you are interested in cryptography, you can read Crypto 101.
To understand how encryption works with Ember.js model, let's see a very simple file model
As you can see, the model has a content property, which is supposed to be encrypted. I would suggest encoding it with base64. Here is an example
Let's decode it and see what it looks like
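For illustration, here is the round trip in Python with some made-up ciphertext bytes; decoding the base64 just yields opaque binary:

```python
import base64

# Made-up "encrypted" bytes standing in for real AES output.
ciphertext = bytes([0x8F, 0x02, 0xA7, 0x4C, 0x9D, 0xEA, 0xF1])

# What the server stores and ships in the model's `content` property.
encoded = base64.b64encode(ciphertext).decode("ascii")
print(encoded)

# Decoding gives the opaque bytes back; nothing readable without the key.
print(base64.b64decode(encoded))
```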
Looks like complete nonsense, huh? Well, that's the point of encryption :P
To decrypt it, you need the access key. As mentioned, it is passed in as a queryParam to the controller; you can define your access_key parameter like this
The secret key usually needs to be passed as part of the URL. You can also encode it in base64, but remember to use URL-safe base64 encoding. Upon receiving the access key, you can validate it and set it on the model like this
I also clear the access_key parameter and then call self.transitionToRoute('session', session), because I don't like leaving the access key in the URL.
Even with a wrong access key, the decryption still runs; it just produces garbage, and it can be hard to tell whether the output was decrypted correctly. In this case, you can provide a validation_code as plain text in the data, along with an encrypted validation_code. You can then decrypt the latter and check whether the validation codes match, like this
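The mechanism can be sketched like this (Python for illustration; the dashboard itself is Ember.js). The XOR "cipher" below is a toy stand-in for AES, used only to keep the example self-contained; the idea of shipping a plaintext validation_code next to its encrypted copy, then decrypting and comparing, is the same:

```python
import hashlib

def toy_decrypt(key, data):
    """Toy XOR-keystream 'decryption' -- NOT real cryptography,
    only a stand-in so the validation logic can run end to end."""
    stream = hashlib.sha256(key.encode()).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

toy_encrypt = toy_decrypt  # XOR is its own inverse

def is_valid_key(access_key, validation_code, encrypted_validation_code):
    """The key is valid if decrypting the encrypted copy reproduces
    the plaintext validation code."""
    return toy_decrypt(access_key, encrypted_validation_code) == validation_code

validation_code = b"bugbuzz"
payload = toy_encrypt("correct-key", validation_code)
print(is_valid_key("correct-key", validation_code, payload))  # True
print(is_valid_key("wrong-key", validation_code, payload))    # False
```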
If the access key is not valid, you can prompt the user to input the correct one. With the access key properly set on the debugging-session model, we can now write this:
It reads accessKey from the session model and decrypts the content, so in the template you can access file.source_code.
I feel we have only unleashed a minor portion of cryptography's power with modern browser technologies. I envision that in the future, more interesting anonymous-computing browser-based applications will be introduced, leveraging asymmetric-key encryption, blind signatures, the Bitcoin blockchain, and all the awesome technologies of the cryptography world. Bugbuzz is just a very simple example showing how we can build an accessible but also trustworthy service with Ember.js + encryption.
AWS Elastic Beanstalk is a PaaS for web-application hosting, pretty much like Heroku, but instead of being designed as a PaaS from the very beginning, it was built by combining different AWS services. Since Elastic Beanstalk is a composition of AWS services, it's an open box: you can tune the AWS components you're already familiar with, like the load balancer, VPC, RDS and so on, and you can also log into the provisioned EC2 instances in the cluster and do whatever you want. However, since these systems were not designed only for Elastic Beanstalk, there is a drawback: the system is a little too complex. Sometimes when you adjust the configuration, it takes a while to take effect, and sometimes there are glitches during the deployment process. Despite these minor issues, it's still a great platform; building a highly scalable and highly available cluster on your own would be way more time-consuming, and you would probably run into many problems Elastic Beanstalk has already solved for you.
Overview of Elastic Beanstalk
Elastic Beanstalk supports many popular environments: Python, Java, Ruby, etc. The way it works is pretty simple: you upload a zip file that contains the application code in a certain predefined file structure, and that's it, AWS runs it for you. For example, to use Python, you need to provide a requirements.txt file in the root folder. The structure of the application zip file would look like this
In Elastic Beanstalk, this zip file is called a Version of the application. You can upload several versions. Then, to deploy the application, you create an entity called an Environment. An environment is actually a cluster running a specific version of the application with certain adjustable configuration. An environment may look like this
Load Balancer: YES
Min instances number: 3
Max instances number: 5
And for the same application, you can have multiple environments, like this
It's pretty neat: you can run different versions of your application in different stacks with different configurations. This makes testing much easier; you can simply create a new environment, run some tests against it, and tear it down once the tests are done. You can also launch a new production environment, make sure it works, then point the DNS record from the old environment to the new one.
Deploy an application as a Docker image, step by step
Although the Elastic Beanstalk system itself is complex, using it is not so difficult. However, there seems to be no obvious walkthrough guide for setting things up; the official AWS documentation is not very readable. And since Docker is still a pretty new technology, you can find very few articles about running Docker with Elastic Beanstalk. So here is my step-by-step guide to running a simple application as a Docker image on Elastic Beanstalk.
Install the Elastic Beanstalk command-line tool
Before we get started, you need to install the Elastic Beanstalk command-line tool. It's written in Python, so you need pip installed on your system; then you run
Then, remember to expose your AWS credentials in the shell environment
Get started with your project and application
Okay, now, let's get started with our demo project.
Next, init our Elastic Beanstalk app.
You have now created an application; to see it in the AWS dashboard, you can type
And you should be able to see our docker-eb-demo application there.
Actually, you can also create the application first in the dashboard, then use the eb init command and select the existing application; either way, it creates a config file at .elasticbeanstalk/config.yml.
Let's build a simple Flask app
We are only here to demonstrate how to run a Docker application with Elastic Beanstalk, so there's no need to build a complex system; a simple Flask app will do.
What it does is very simple: it prints the WSGI environment dict of the request, hence the name echoapp. You may notice that we read PRINT_INDENT as the print indent, along with several other variables for running the HTTP server. Since both Docker and Elastic Beanstalk use environment variables for application configuration, to keep your application configurable, always read application settings from environment variables instead of configuration files.
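In Python that pattern is just a few os.environ lookups at module import time (PRINT_INDENT and DEBUG appear later in this post; the defaults here are assumptions for the demo):

```python
import os

# Read every setting from environment variables, with sane defaults,
# so both Docker and Elastic Beanstalk can configure the app the same way.
PRINT_INDENT = int(os.environ.get("PRINT_INDENT", "4"))
DEBUG = os.environ.get("DEBUG", "false").lower() in ("1", "true", "yes")

print(PRINT_INDENT, DEBUG)
```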
Build the docker image
To build the Docker image, I like to use git archive to make a snapshot of the project and add it into the container with the ADD command. That way, I won't accidentally build an image containing some development modification. However, since a Dockerfile is not good at performing preparation steps before building the image, I like to use a Makefile for that. Here you go
and for the Dockerfile
We use phusion/baseimage as the base image. It's basically a modified version of Ubuntu, adapted to run well inside a Docker container. It provides the runit service daemon, so we simply install the app and create the service at /etc/service/echoapp/run.
With these files, here you can run
to build the Docker image. Then you can test it by running
and use docker ps to see the mapped port
and curl to the server
Note: if you are using boot2docker in an OS X environment, you should run boot2docker ip to find the IP address of the virtual machine and connect to that address instead of 0.0.0.0.
Upload your application to Docker registry
There are two ways to run Docker apps on Elastic Beanstalk. One is to let it build the Dockerfile for you every time you deploy. I don't really like this approach: the value of Docker is that you can build your software as a solid unit, then test it and ship it anywhere; if you rebuild the Docker image on the server at every deployment, much of that value is lost. So I prefer the other approach: create a Dockerrun.aws.json file in the root folder of your project, indicating where your Docker image can be pulled from. Here is the JSON file
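A minimal Dockerrun.aws.json along these lines might look as follows (the container port and log path are assumptions for this demo app):

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "victorlin/echoapp",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "80"
    }
  ],
  "Logging": "/var/log/echoapp"
}
```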
As you can see, we indicate that the Docker image name is victorlin/echoapp; Elastic Beanstalk will pull and run it for you. If your Docker image on Docker Hub is private, you need to provide Authentication information pointing to an S3 file that contains a .dockercfg file (it can be generated by the docker login command in your home directory). If you use the S3 .dockercfg file, remember to add the proper permissions to the EC2 instance profile used by Elastic Beanstalk so the file can be accessed. And yes, in the previous step we didn't upload the image to Docker Hub; you can do that by
Or if you prefer to do it manually, you can also use docker push command to do that.
The Ports and Logging entries indicate which port your Docker image exposes and the path to its log files. Elastic Beanstalk redirects traffic to that port and tails the log files in that folder for you.
Create our development environment
Okay, we have our Docker image ready to be deployed now. To deploy it, we need to create an environment first. Here you run
It takes a while for the environment to get ready. You can also create an environment from the AWS dashboard, then run eb use <name of environment> to bind the current git branch to the created environment. To see your created environment, type eb console and view it in the dashboard.
If you see that the environment is red, or there were errors when running eb create, you can run
to see what's going on. You can also visit the application in a browser by typing
To see the status of the environment, type
Deploy a new version
After you make some modifications to your app, commit them with git, build a new Docker image, and push it to Docker Hub. Then you can run
to deploy the new image to all servers.
For production usage, I would suggest you pin the version number in Dockerrun.aws.json file. For example, the image name should be something like this
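For example, the Image section could pin a tag like this (the 0.1.0 tag is just a placeholder for whatever release you cut):

```json
{
  "Image": {
    "Name": "victorlin/echoapp:0.1.0"
  }
}
```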
That way, when you run eb deploy, it takes a snapshot of your current git commit and uploads it as a Version. When that version is deployed, the specific version of the Docker image is pulled and installed. If you don't specify a tag, the latest image is pulled and installed, which is not a good idea for a production environment, since you may want to roll back to the previous version if the new one is broken.
Set the environment variable
To see the current environment variables, simply type
And to update them, for example to change PRINT_INDENT to 4 and enable DEBUG, you type
That's it. It's actually not that hard to run your Docker image on Elastic Beanstalk, just a few fiddly details. Once you get familiar with it, it's a piece of cake. The whole demo project can be found here: docker-eb-demo. Hope you enjoy running Docker on Elastic Beanstalk as much as I do :)
There are many deployment tools, such as Puppet, Chef, and SaltStack, and most of them are pull-based. That means when you deploy to a machine, the provisioning code is downloaded to the target machine and run locally. Unlike many others, Ansible is a push-based deployment tool: instead of pulling code, it pushes SSH commands to the target machine. The push-based approach is great in many situations; for example, you don't need to install an Ansible runtime on the target machine, you can simply provision it. However, this approach also has shortcomings. Say you want to provision EC2 instances in an AWS auto-scaling group: you don't know when a new instance will be launched, and when it happens, it needs to be provisioned immediately. In this case, Ansible's push approach is not that useful, since you need to provision the target machine on demand.
There are several ways to solve that problem, namely, ways to run Ansible provisioning code in a pull-based manner.
One obvious approach is ansible-pull, an Ansible command-line tool that clones your Ansible git repo and runs the playbooks locally. It works; however, there are drawbacks. The first is the dependency issue: to run ansible-pull on the target machine, you need to install an Ansible runtime on it first, and if your playbook depends on a newer version of Ansible, you need a way to upgrade the runtime. Another problem is that the provisioning code is installed via git or another version-control system; it's hard to verify the integrity of those playbooks, and the code cannot be shipped as a single file.
Ansible Tower is the official commercial tool for managing and running Ansible. It provides an interesting feature called "phone home". It works like this: when a new machine is launched, it makes an HTTP request to the Ansible Tower server, as if calling home and saying
hey! I'm ready, please provision me
Then the server runs ansible-playbook against the machine. It works, but one problem we see is that for your Ansible Tower to SSH into different machines and run sudo commands, you usually need to install your SSH private key on the tower server and preinstall the corresponding public key on all the other machines. Allowing one machine to SSH into all the others makes me uncomfortable; it's like putting all your eggs in one basket. You could set a passphrase on the private key on the tower server, but since your machines in the AWS auto-scaling group need to be provisioned at any time, you cannot encrypt the private key with a passphrase.
An interesting approach - Docker
With the requirements in mind
No runtime dependencies issue
Provision code can be shipped as a file
Provision code integrity can be verified (signed)
an interesting idea came to my mind: why don't I simply put the Ansible playbooks into a Docker container, ship the image to the target machine, then run the playbooks from inside the container against the host via SSH? With a Docker image, I don't need to worry about the Ansible dependency issue: the Ansible runtime itself and the other necessary runtimes, such as boto, can all be installed into the image. And the Docker image can be shipped as a single file, so we can sign the file and verify it on the target machine to ensure its integrity.
A simple example
I wrote a simple project to demonstrate this idea; it can be found on GitHub. It's actually pretty simple: in the Dockerfile, we install the Ansible dependencies and the necessary roles, and copy our own site.yml into the image.
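The actual Dockerfile is in the repo; its gist is something like this sketch (the base image, role name and paths here are illustrative):

```dockerfile
FROM ubuntu:14.04

# Install Ansible and the other runtimes the playbooks need
RUN apt-get update && apt-get install -y python-pip python-dev
RUN pip install ansible boto

# Install the roles the playbook depends on, then copy our playbook in
RUN ansible-galaxy install example.somerole
COPY site.yml /srv/ansible/site.yml
WORKDIR /srv/ansible
```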
You can build the ansible image with
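something along these lines, assuming we tag the image ansible-provision (the tag is just an example):

```shell
# Build the provisioning image from the directory containing the Dockerfile
docker build -t ansible-provision .
```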
Then, before running it, you need to create a hosts file containing the private IP address of your host machine, like this
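A minimal hosts (inventory) file would look like the following; the group name is up to you, and the address is a placeholder for your machine's private IP:

```ini
# Ansible inventory: the docker host, reachable from inside the container.
# 10.0.0.5 is a placeholder for your machine's private IP address.
[dockerhost]
10.0.0.5
```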
Notice that since Ansible is executed inside the Docker container, localhost simply doesn't work; you need to specify an address that is accessible from the Docker container network. To allow SSH connections from the container, you also need a temporary SSH key pair: the public key installed on the host machine, and the private key available to the container. Here is pretty much the command you run
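In rough form, with an illustrative image name and host-side paths:

```shell
# Run the playbook from inside the container against the docker host.
# The image name, mounted host paths and remote user are illustrative.
docker run -it --rm \
    -v /path/to/hosts:/tmp/hosts \
    -v /path/to/insecure_private_key:/tmp/insecure_private_key \
    ansible-provision \
    ansible-playbook site.yml -i /tmp/hosts \
        --private-key=/tmp/insecure_private_key -u ubuntu
```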
We map our hosts file into the container at /tmp/hosts and the SSH private key at /tmp/insecure_private_key, then use them in the ansible-playbook command arguments. That's it!
It's so powerful to combine Ansible and Docker
It's so powerful to combine Ansible and Docker: as you can see, the software for provisioning machines is now packed as a Docker image, so you can run it anywhere. It's a solid unit; you can sign it, verify it, tag it, ship it, share it and test it. Everything is installed in the container, so you don't need to worry about missing plugins or roles on the target machine.
The only drawback I can think of is that you need to install Docker on the target machine before you can use this approach, but that's not much of a problem: Docker is getting more and more popular, and you can preinstall it in your AMI. The only thing I am unhappy with in Docker is the image registry system; pushing or pulling an image is very slow when it has many layers and a big size. I actually have an idea for building a much better Docker registry; hopefully I'll have time to do it.
I am already using this approach to provision machines in our production environment, and it has worked like a charm so far. I am looking forward to seeing people use this technique to pack deployment code into Docker images. Imagine this:
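say, pulling a published provisioning image and pointing it at your own inventory (the image name here is purely hypothetical):

```shell
# Hypothetical: provision a Swift cluster from someone's shared image
docker run -it --rm example/swift-cluster-ansible \
    ansible-playbook site.yml -i /tmp/hosts
```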
and boom! You have a fully functional Swift cluster in AWS EC2 now. Isn't that awesome?
Docker has been really hot recently. It lets you run your software in a Linux container easily. It's a kind of OS-level isolation rather than hardware or kernel emulation, so you don't pay much of a performance penalty but still get pretty nice virtual-machine features. I really like the analogy used by the Docker community: shipping software should be easier, and Docker serves as the standard container of the shipping industry.
Building docker images is not hard, but ...
Although Docker provides an easy way to deliver and run your software in a Linux container, there is still no obvious and easy way to build a Docker image for a big project. To build the image for a large and complex project, you would probably need to
Clone your private software repo into the build folder
Ensure base images are built before your project image
Generate some files dynamically, such as the current git commit revision
Upload the image with your credentials
With a Dockerfile, you can only have static steps for building the image; obviously, it was not designed for any of the steps listed above. And since Docker uses a layering file system, you probably don't want to put your GitHub credentials into the container and pull the repo inside it: layers work much like git commits, and once you commit something, it's hard to remove it from the history. So you definitely want to do these things outside the container and then put the results together.
My first solution - Crane
With these requirements in mind, I felt the problem was pretty similar to my experience with Omnibus, a tool for packing your software into a standalone deb package. So I built a simple tool in Python for building Docker images, named Crane. It allows you to define steps for building the image, and it also provides template generation with Jinja2.
The final solution - ansible
Crane was working fine, but I don't like to reinvent a wheel and maintain it when there is an obviously better solution available. After I had played with Ansible for a while, I realized it is actually a far better solution for building Docker images. So, what is Ansible, you may ask? Well, it's yet another deployment tool, like Puppet, Chef or SaltStack.
Wait, what? Why would you use a deployment tool for building Docker images? It may sound odd at the very beginning, but Ansible is not just yet another deployment tool. Its design is quite different from its predecessors: it uses SSH to push commands to target machines, while the other tools are all pull-based. It also provides many modules for different operations, including creating instances in EC2 or other cloud computing providers. Most importantly, it can do orchestration easily.
Of course, it meets the requirements we mentioned before:
Clone git repo? Check.
Build base image? Check.
Generate dynamic file? Check.
Generate templates? Check.
Upload images? Check.
Moreover, with Ansible, you can launch an EC2 instance, build the image inside it, and run a series of tests before you publish the image. Or you can simply build the image in your Vagrant machine or on your local machine. It makes building software extremely flexible: you can run the build process anywhere you want as long as commands can be pushed to it via SSH, you can provision the whole build environment, and you can even drive a fleet of machines in the cloud for building. Pretty awesome, isn't it?
Show me the code
Okay, enough talking, let's see the code. The tasks/main.yml looks like this
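The real file lives in the project repo; in spirit it is a sequence of build steps expressed as Ansible tasks, something like this sketch (the repo URL, paths and image name are placeholders):

```yaml
# tasks/main.yml (sketch): the build steps as Ansible tasks
- name: clone the private software repo into the build folder
  git: repo=git@github.com:example/myapp.git dest=/tmp/build/myapp

- name: generate a file containing the current git commit revision
  template: src=revision.j2 dest=/tmp/build/myapp/REVISION

- name: build the docker image
  command: docker build -t example/myapp /tmp/build/myapp
```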
and the playbook looks like this
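which, assuming the tasks above live in a role (the role and group names are illustrative), could be as small as:

```yaml
# playbook.yml (sketch): apply the build role to the build machine
- hosts: build
  sudo: yes
  roles:
    - docker-image-build
```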
So, to build with vagrant, you can run something like this
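for example (the inventory file and the insecure key path are the usual Vagrant defaults; adjust them to your setup):

```shell
# Run the build playbook against the local vagrant machine
ansible-playbook playbook.yml -i vagrant_hosts \
    --private-key=~/.vagrant.d/insecure_private_key -u vagrant
```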
A tool for deployment but also amazing for building software
Although Ansible was not designed for building software, that doesn't necessarily mean you cannot use it for that, and surprisingly, it does the job very well. With its machine provisioning and orchestration capabilities, you can integrate building and deployment together easily. The build environment itself can also be provisioned before building the software, and cloud computing resources can be leveraged. I feel there are lots more interesting things that can be done with Ansible, and I'm looking forward to seeing how people use it not just for deployment but also for building software :P
Correcting PEP8 warnings manually is always a hateful job.
I bet you don't like it either, and today I couldn't take it anymore. I wondered: why should I do something a machine could do for me? So I searched for solutions on the Internet and found a helpful article - Syntax+pep8 checking before committing in git. The basic idea is to add a pre-commit hook script to git that checks PEP8 syntax before each commit. With that in place, you can no longer commit code with PEP8 warnings; when you try, you see errors like this
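pep8 reports each violation with the file, position and error code; typical output (the file name here is illustrative) looks like:

```
foo.py:12:1: E302 expected 2 blank lines, got 1
foo.py:30:80: E501 line too long (83 > 79 characters)
```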
That's great, but you still need to correct these issues manually. I thought there must be something that can do this boring job for you, and yes, I found autopep8. I could use autopep8 in the pre-commit hook to correct PEP8 issues for me; however, I don't think it is a good idea to let a code-formatting tool modify your code silently while committing - I want to know what the tool modified. So here is another solution: I use a post-commit hook instead, to compare the latest commit with the previous one:
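A sketch of such a hook (not the original script; it assumes Python files in the commit and at least one earlier commit to diff against):

```shell
#!/bin/sh
# .git/hooks/post-commit (sketch): run autopep8 on the Python files
# touched by the latest commit, leaving the fixes unstaged for review.
files=$(git diff HEAD~1 HEAD --name-only --diff-filter=ACM | grep '\.py$')
if [ -n "$files" ]; then
    autopep8 --in-place $files
    git diff    # show what autopep8 changed compared with the commit
fi
```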
All you need to do is install autopep8
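typically via pip:

```shell
# Install autopep8 from PyPI
pip install autopep8
```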
and put the post-commit script at .git/hooks/post-commit. This way, once I make a commit, it corrects PEP8 issues for me; I can review what was modified and make another PEP8-correction commit. With this, you can finally enjoy coding rather than wasting time removing trailing whitespace everywhere :D
I really love and enjoy programming in Python; it is one of my favorite programming languages, and I love to recommend it to other developers or to people who are about to learn their very first programming language. However, there is always an awkward moment - when they ask whether they should use Python 2 or Python 3. My answer would be:
Ah..., you should learn Python 2 first; major libraries support Python 2 only. It will take about one or two years before these third-party resources catch up with Python 3.
Sadly, five years have passed since Python 3 was released, but still only 3% of Python developers are using version 3. If people asked me the same question now, I really wouldn't know how to answer, and I have even started to wonder whether this is the end of the programming language I like so much. These days, I have read a lot of articles talking about the Python 2 and Python 3 crisis:
There are many different opinions. Some say you should kick people harder, so that they will start using Python 3. Some say you should make Python 3 even better, so people will start using it. Me? I don't believe you can kick people hard enough to make them jump to the moon, and I also don't believe people can jump to the moon simply because you put treasure on it and say "come and get it if you can". Five years have passed; why are we still not on the moon?
I think the problem is simple: the goal is too far, and the community is too eager. I recall a story I heard when I was a child.
There was a young man carrying all kinds of goods, and a ship was about to leave. He asked a local elder,
Can I make it in time?
The elder took a glance at him and said,
If you walk slowly and take it easy, you can make it.
The young man didn't take the advice; he ran to the port as fast as he could. Unfortunately, he fell on the road, all his goods scattered around, and he didn't make it in time.
The Python community is a little like the eager young man in the story: rushing to build so many fancy advanced features, but what is the point if nobody is using them? I think maybe it is time to slow down, so that we can go far.
Interestingly, there are calls for Python 2.8 in these discussions, and personally, I also believe Python 2.8 could be the missing bridge from Python 2 to Python 3; if necessary, maybe there should even be a Python 2.9 and a Python 2.10. I know it is in the nature of developers to discard old stuff and eagerly build and embrace awesome new widgets. But in the real software world, you don't get to awesome suddenly; instead, you keep getting better gradually. So, let's stop blaming anyone and build Python 2.8 :)
When you buy a house, you take out a mortgage. When you buy a car, you take out an auto loan. Maybe you are rich enough to clear all the debts at once, but either way, we all live with debts, more or less. In fact, as software developers, we all live with debts as well - the so-called technical debts. I really like the debt analogy; technical debts are similar to debts in real life. Funny enough, most people know how real debts work, but technical debts are not well known among developers. It makes sense, though: people have lived with the idea of debt for maybe thousands of years, but computers have only a few decades of history.
It's really nice to have an accurate analogy; it allows me to explain things by borrowing mature experience from the financial world. I am not an expert in finance; however, when I saw a cash flow diagram, I realized it is exactly the right diagram for explaining technical debts. Let's see what the cash flow diagram looks like when you take out a loan from a bank.
As the name implies, it's all about the flow of cash. Green arrows above the axis are income; red arrows below the axis are costs. When you take out a loan, you have an immediate income at the beginning, but it doesn't come for free: you need to pay interest to the bank periodically, and eventually you need to repay the initial debt (not shown in the diagram). There are various other situations; for example, you might be able to rent the house to others, giving you recurring income, then sell the house when the price rises and pay off the mortgage. Nevertheless, we are not teaching finance here; the point is that we can borrow this diagram for visualizing technical debts.
Raise a technical debt
In software development, I see production as the income, and reduced production or extra time spent as the cost. So how do you raise technical debts, you may ask? Well, the fact is you don't have to; there are many built-in debts in the software development process.
Let's see an example. Say you are developing a system. At first, there is only one feature in it, and every time you modify the code, you have to check whether that feature still works correctly. The code you write is the income, and the time spent testing is the cost. Over time, as you fix bugs and improve the code, you always need to make sure the feature works correctly. This is a very common built-in technical debt in software development: not writing automated tests is the debt itself. By skipping them, you save some development time - an immediate income (production gain) - but you pay interest every time you modify the code. The diagram would look like this
Things get even worse when new features are added to the system: you have more features to test each time you modify the code. The diagram for a growing system looks like this
You might say, hey, why not just ignore them? I believe they won't break so easily. Well, that could be true; however, when you save time by not testing, the cost of testing turns into risk: your customers or end users are going to discover those undetected issues for you.
Moreover, when the system grows to a really big scale, you may find yourself endlessly testing countless functions, with endless bugs to fix. That simply means the debt is too high: your productivity is all eaten by the interest, and you are never going to deliver the product unless you pay off the debt.
In the same case, suppose you have more and more features in the system, but you spent your time on automated testing at the beginning. It is just like paying off the debt early, which keeps the interest under control. When you add a new feature, all you have to do is write the corresponding tests for it. This way, you can make some real progress.
Source of debts
Unlike real debts, there is no sheet that tells you how much they are; technical debts sometimes just aren't obvious. Nevertheless, we know some common sources of debt, such as
Bad coding style
No automated testing
No proper comments in code
No version control system
Maybe there are other debts not listed above; however, the point is that these debts all have a similar effect - you or your team members pay the interest while developing on such code. For example, with a badly written function, every time developers read it, they all need extra time to understand it; that is the interest. Interestingly, although you have to pay the interest, not all debts gain you a big boost in development; some debts can be avoided easily. Experienced developers can usually produce code with good style and design.
Debts are not all that bad
So far we have talked as if debts are all evil - demons you should never deal with. But the truth is that raising debts can be a good thing sometimes, since raising debts buys you time. Even in the real finance world, raising debts can be a key to success; when a company has no debts, investors may actually see the company as inefficient. So it's about trade-offs: experienced developers not only produce code with lower debts than inexperienced ones, they also know when to raise debts and how much to raise.
For example, say you are running a startup, and you don't even know whether your product is going to work. At this point, you can do some dirty hacks to make things work, and pay off the technical debts later, after you survive.
Nice, I am not the one who pays bill
People love a free lunch; it's really nice when you don't have to pay the bill, isn't it? Developers like it too. There are many situations where you are not the one paying interest on the technical debts. For example, you accept a software development contract, and you are pretty sure that once you deliver the project, you are never going to see it again. In this case, many developers just don't care - they are not the ones who pay the bill, so why should they?
This is an actual moral hazard. Funny enough, it also happens in the real finance world: Wall Street bankers know taxpayers are going to pay the bill, so why should they care about risk? Unfortunately, unlike bankers, ignoring the moral hazard won't earn you billions of dollars; it only earns you curses from the next developer. And sometimes you just have no choice - the deadline is right ahead, and all you can say is
Besides situations where you have no choice, sometimes you can raise as much technical debt as possible without worrying about it. For instance, if you are writing a run-once, throwaway script, then feel free to raise all the debt you like.
In software development, it is important to understand technical debts. There is no easy or accurate way to measure them, but you can tell from experience. My debt analogy here may not be 100% precise, but it surely gives you a feel for the idea. To build successful software, you should keep technical debts in mind, and control them rather than letting them control you.