Docker for Developers on Mac and Windows
GARETH RUSHGROVE, Product Manager, Docker

This talk:
- Using Docker on your desktop
- Building Docker images
- Describing applications in code
- Using IDEs with Docker
- Graphical tools for creating applications
The last time I used xhyve, it kernel panicked my Mac. Researching this on the xhyve GitHub account [1] showed that the panic was traced to a bug involving VirtualBox. That is, if you've started a virtual machine with VirtualBox since your last reboot, subsequent starts of xhyve panic.
So, buyer beware, especially if said buyer also uses tools like Vagrant.

[1] I've said before that I think the Docker devs have been iterating too fast, favoring features over stability. This development doesn't ease my mind on that point.

EDIT: I'd appreciate feedback on downvotes. Has the issue been addressed, but not reflected in the tickets?
Has Docker made changes to xhyve to address the kernel panics?

Thanks, this is useful feedback. There are various workarounds in the app to prevent such things, but the purpose of the beta program is to ensure that we catch all the weird permutations that happen when using hardware virtualization (e.g. the Android emulator).
If anyone ever sees any host panics, we'd like to know about it (beta-feedback@docker.com) so we can fix it in Docker for Mac and Windows. Fixes range from hypervisor patches to simply doing launch-time detection of CPU state and refusing to run if a dangerous system condition exists.

Perhaps people don't agree with his criticism of stability because the panic stems from a conflict with VirtualBox.

Folks are welcome to disagree, but Docker has a history of shipping software which uses a 3rd-party feature which breaks, to which they frequently responded 'not our code, talk to someone else': btrfs instability, corrupted volumes due to conflicting devmapper libraries, iptables dropping routes, upgrades orphaning containers, etc. I realize they don't have control over all of the variables, but constantly releasing unstable 3rd-party features was not the greatest behavior, and the 'Not My Problem' response to issues is aggravating. All that said, since they're working against their own fork of xhyve, it is a sign that these kinds of issues will be addressed by the Docker team this time, which is a good thing.
If I had a yearly quota on HN for upvotes, I'd use all of them on this.

Volume mounting for your code and data: volume data access works correctly, including file change notifications (on Mac, inotify now works seamlessly inside containers for volume-mounted directories). This enables edit/test cycles for “in container” development.

This (filesystem notifications) was one of the major drawbacks of using Docker on Mac for development, and a long-time prayer to the development god before sleep. I managed to get it working with Dinghy, but it still felt like a hack.
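For anyone who hasn't tried that workflow: the promise is that a plain bind mount now behaves like a local directory. A minimal sketch, with the image name, paths, and the `watch` script purely illustrative:

```sh
# Edit code on the Mac; a watcher inside the container picks up inotify
# events from the bind-mounted directory and rebuilds on save.
docker run --rm -it \
  -v "$(pwd)":/usr/src/app \
  -w /usr/src/app \
  node:4-slim \
  npm run watch
```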
I gotta say I've had about all the VirtualBox I can take in this lifetime. It's caused me pretty dire file-handling problems on 3 projects, and only 2 of those had any Docker in them. Thanks for working on this. I have an open bug against docker-compose (docker doesn't do the same thing by itself) where the wrong layers are being used, but only on VirtualBox. Hopefully this will solve that problem, as well as how to make my dev and prod database handling more homogeneous.
And I can finally turn sendfile back on in my nginx configs without having to special-case anything for dev!

Is this using the normal VirtualBox 'shared folders' functionality? For Vagrant we had to drop VirtualBox in favour of VMware Fusion because VirtualBox suffered cache corruption almost every day. You would write a file on the host, and the file would be corrupt inside the VM. Last I checked, this bug was still open (I'm not certain, I'm on my phone right now), but it still makes me wary of using VirtualBox again. Have you dealt with this issue at all?
Edit: Or is this not using any VirtualBox code at all?

Thanks for exposing me to ThinApp and the rest.
I took a quick look; these are Microsoft-based technologies designed to run Windows apps, but conceptually I don't see much difference. Docker is a containerization standard that relies on various Linux capabilities to isolate application runtimes (or containers, if you will). On Mac and Windows this used to be achieved by running a small Linux VM in VirtualBox, but it looks like this release has brought xhyve in, which is supposed to have an even smaller footprint.

Docker uses LXC containers. In Linux, these aren't VMs; they're lightweight user-land separations that use things like cgroups and lots of really special kernel modules for security.
Unfortunately, this means Docker only runs on Linux. Not even a special 'Docker kernel' Linux; all the features they need are in the stock kernel tree (but it's still a lot of modules). On Windows/Mac, you still need to run it in a virtual machine.
Even with this update, you still need to run in a virtual machine. It's not actually running Docker natively. It can't, even on Mac, which has a (not really) *NIX-ish base.
You then have to use the docker0 network interface to connect to all your Docker containers. On Linux, you can just go to localhost.
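To make that concrete, a hedged sketch of the difference (port numbers and machine name are illustrative; assumes the older docker-machine/VirtualBox setup on the Mac side):

```sh
docker run -d -p 8080:80 nginx

# On Linux the published port is right there on the host:
curl http://localhost:8080

# On Mac (pre-Docker-for-Mac), traffic goes to the VM's address instead:
curl "http://$(docker-machine ip default):8080"
```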
I think FreeBSD has native Docker support with some custom kernel modules. I'm not sure; I've only looked at the README, and I haven't tried it. So even on Windows/Mac, all your containers do run in one VM (whereas with the traditional stuff you mentioned, you'd need a VM for each thing). Docker containers are meant to handle one application (which runs as root within its container, as the init process). With VMs, you'd typically want some type of configuration management (Puppet, Ansible, Chef, etc.) that sets up apps on each VM/server.
With Docker, each app should be its own container, and you link the containers together using things like Docker Compose or by running them on CoreOS or Mesos. In my work with Docker, I'm not sure how I feel.
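For anyone unfamiliar with the linking mentioned above, a minimal CLI-level sketch (Compose wraps the same idea in a YAML file; container and image names are illustrative):

```sh
# Start a database container, then link an app container to it by name.
docker run -d --name db postgres:9.4
docker run -d --name app --link db:db my-app-image
```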
LXC containers have had a lot of security issues. Right now, Docker doesn't have any glaring security holes, and LXC has increased security quite a bit. CoreOS is pretty neat, and I wouldn't use Docker in production without it or another container manager. (The docker command by itself still cannot prune unused images; after a while you get a shit ton of unused images that just waste space. CoreOS prunes these at regular intervals. A docker command to do this is still a GitHub issue. Writing one yourself with docker-py is horribly difficult because of image dependencies.)
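In the meantime, the usual stopgap is a shell one-liner rather than docker-py. A rough sketch, not an official command: it removes only dangling (untagged) images, and anything still used by a container will simply refuse to delete.

```sh
docker rmi $(docker images -q -f dangling=true)
```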
Oh, and images: Docker uses images to build things up like building blocks. That's a whole thing I don't want to go into, but look it up. It's actually kind of interesting, and it allows base image updates to fix security issues (although you still need to rebuild your containers against the new images, I think; I haven't looked into that yet). I find it lazy in some ways.
I think it's better to build packages (rpms, debs). FPM makes this really easy now (see the sketch below). Combine packages with a configuration management solution (haha, yeah, they all suck: Puppet, Ansible, CFEngine are different levels of horrible; Ansible so far has pissed me off the least) and you can have a pretty solid deployment system. In this sense, Docker does kinda make more sense than handling packages.
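For reference, a minimal FPM invocation; the path, package name, and version are illustrative:

```sh
# Package a built directory tree as a .deb without hand-writing debian/ metadata.
fpm -s dir -t deb -n myapp -v 1.0.0 --prefix /opt/myapp -C ./build .
```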
You throw your containers on CoreOS/Mesos and use Consul for environment variables, and you can have a pretty smooth system. I'm trying to actually like Docker. I've only made fun of it in the past, but now I work for a shop that uses it in production.

There are no custom kernel modules; everything is in a stock kernel since 3.10 (which, not so coincidentally, is the minimum supported kernel version). Containers run as whatever user you tell them to run as; the default is root because that's the only guarantee. LXC is also something different: LXC is a set of userland tooling to interact with cgroups and namespaces (which Docker used to exec out to).
LXC != Linux containers (and indeed there isn't really such a thing as a container in the way there is a zone or a jail on Solaris and BSD respectively; it's made up). Also, again, no custom kernel modules on BSD.

The glaring security hole in Docker is that it has not designed a solution for keeping secret data necessary to build an image from being in the image at run time. They also haven't solved the general case of keeping transient build data out of the final image either, but that's a broader problem that doesn't necessarily involve security concerns.
For now not a lot of people are concerned about either problem so it's not getting the attention it deserves. But they've been steadily peppered with inquiries about these issues for a year or two now and they still don't have an answer, which is concerning. I believe this is one of the reasons the CoreOS guys wandered off to do their own thing.
Fortunately for us and unfortunately for them, they have the design aesthetics of the Marquis de Sade, and until they start giving even half a thought to ergonomics, Docker is perfectly safe.

I think you just proved my point. We're all of us running around with our pants down because we think Docker is taking care of this stuff, but it's merely a bunch of features that look like they should be fit for that purpose but aren't. And this is why I am stuck with a separate build and package phase: I have to have that separation between the data available at build time and what ends up shipped. But even there I'm pretty sure I'm making mistakes, due to some of the design decisions Docker made thinking they were helping, when they actually made things worse. For instance, there's no really solid mechanism for guaranteeing that none of your secret files end up in your Docker image, because they decided that symlinks were forbidden. So I have to maintain a .dockerignore file, and I can never really be sure from one build to the next that I haven't screwed it up somehow. Which I will, sooner or later. I'm always one bad merge away from having to revoke my signing keys. It's a backlash waiting to happen.
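To illustrate the vigilance problem being described: both the exclusion list and the audit have to be maintained by hand. A hedged sketch, with the file patterns and image name purely illustrative:

```sh
# Keep key material out of the build context...
cat > .dockerignore <<'EOF'
.git
secrets/
*.pem
*.key
EOF

# ...and, because that's easy to silently get wrong, audit the built image:
docker build -t myapp .
docker run --rm myapp find / -name '*.pem' -o -name '*.key'
```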
I'm sorry, things got hectic and I bailed on the discussion. I thought I had a handy link to the bug I was thinking of, but I couldn't find a back-link from the issue I'm watching to the one in docker/docker. I think, but am not 100% certain, this is the issue I was thinking of; it seems the most likely, and it was just fixed in 1.10:

Some day I'm sure .dockerignore will be solid, but my confidence level isn't high enough yet (it's getting there) to base my trust on it. My point was that there are other ways that directory structures, and what is visible to COPY, could have played out where vigilance is less of a problem.
It's usually immediately obvious if a file you actually needed is missing from a build, but much less obvious that a file you categorically did NOT want to be there is absent, because the system runs in one of those scenarios and dies conspicuously in the other.

How would they 'encrypt' them in a way that wouldn't be trivial to undo?

I think people aren't concerned about it because it doesn't make sense to try to put secrets into container images. Whatever you're using to deploy your Docker containers should make those secrets available to the appropriate instances at runtime. This is how Kubernetes handles secrets and provides them.
(For example, what if you have two instances of a service and they need to have different SSL certs? Are you going to maintain two different containers that have different certs?
Or would you have a generic container and mount the appropriate SSL cert as a volume at runtime?)
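A minimal sketch of that runtime-injection approach; the image name and paths are illustrative:

```sh
# One generic image; per-instance certs are mounted read-only at start time.
docker run -d \
  -v /etc/certs/instance-a:/etc/nginx/certs:ro \
  -p 443:443 \
  my-generic-nginx
```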
I've actually read that. For context, it's a comment made before the feature was complete. Said feature, according to the manual, doesn't persist the value, and thus is probably suitable for passing a build-time secret. From my testing, as long as you set the build-arg and consume it directly, it doesn't seem to persist. That said, it's super easy to fuck that up if the tool you consume it with then goes on to save the secret somewhere. Thus it's no doubt best to use expiring tokens or keep your build separate.
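For concreteness, a hedged sketch of the build-arg mechanism under discussion (the token name and repo are illustrative; as noted above, prefer expiring tokens, since the value can still leak through whatever consumes it):

```sh
# Dockerfile side (illustrative):
#   ARG GITHUB_TOKEN
#   RUN git clone "https://${GITHUB_TOKEN}@github.com/example/private-repo.git"

docker build --build-arg GITHUB_TOKEN="$(cat ~/.github-token)" -t myapp .
```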
Also, don't use it to seed a runtime secret; that would force you to treat the image itself as a secret.

I linked to that because it cross-references the PR where the build-args feature was added. If they're out of sync, that's 1) news to me and 2) confusing, and it should be fixed.

I think one of the things we're seeing is that Docker is opinionated, a number of powerful dev tools and frameworks are also opinionated, and us poor developers are stuck between a rock and a hard place when those opinions differ. For instance, I'm still not clear how you'd use the docker-compose 'scale' argument with nginx. Nginx needs to know what its upstreams are, and there's (IIRC) still an open issue about docker-compose renumbering links for no good reason, with some Docker employee offering up how that's a feature, not a bug. I could punch him. Single-use auth tokens and temporary keys sure would fix quite a few things, to be certain, but those opinions keep coming in and messing up good plans :/
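To spell out that scale/nginx mismatch with a sketch (compose-v1-era conventions; the project and service names are illustrative): nginx has to enumerate its upstreams up front, so the config must match whatever container names scaling happens to produce.

```sh
cat > nginx.conf <<'EOF'
upstream app {
    server myproject_web_1:8000;
    server myproject_web_2:8000;
}
EOF

# If the links get renumbered, the upstream list above silently goes stale.
docker-compose scale web=2
```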
Nothing about the freedom of the software has to do with whether or not you compensate the authors for creating it. If you take free software and never consider paying its developer for making it, despite them providing you freedom, choice, and a degree of trust in the software you cannot have with proprietary code, then you are the kind of person to blame for why proprietary software is so rampant today. For example, I donate $200 to the Document Foundation every year to match the cost of an annual subscription to Office 365, plus a 33% bonus for respecting my freedom.

We have been working with Hypervisor.framework for more than 6 months now, since it came out, to develop our native virtualization for OS X. As a result, we are able to distribute Veertu through the App Store.
It's the engine for 'fast' virtualization on OS X, and we see now that Docker is using it for containers. We wish that Apple would speed up the process of adding new APIs to Hypervisor.framework to support things like bridged networking and USB, so everything can be done in a sandboxed fashion without having to develop kernel drivers. I am sure the Docker folks have built their kernel drivers on top of the xhyve framework.

If you're using Docker on Mac, you're probably not using it there for easy scaling (which was the reason Docker was created back then), but for the 'it just works' feeling when using your development environment.
But Docker introduces far too much incidental complexity compared to simply using a good package manager. A good package manager can deliver the same 'it just works' feeling as Docker while being far more lightweight.
I wrote a blog post about this topic a few months ago; check it out if you're interested in a simpler way of building development environments.

Let me explain Docker for Mac in a little more detail (I work on this project at Docker). Previously, in order to run Linux containers on a Mac, you needed to install VirtualBox and have an embedded Linux virtual machine that would run the Docker containers from the Mac CLI. There would be a network endpoint on your Mac that pointed at the Linux VM, and the two worlds were quite separate. Docker for Mac is a native MacOS X application that embeds a hypervisor (based on xhyve), a Linux distribution, and filesystem and network sharing that is much more Mac native. You just drag-and-drop the Mac application to /Applications, run it, and the Docker CLI just works. The filesystem sharing maps OS X volumes seamlessly into the Linux container and remaps MacOS X UIDs into Linux ones (no more permissions problems), and the networking publishes ports to either `docker.local` or `localhost`, depending on the configuration.
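In other words, a hedged sketch of the flow just described (port numbers illustrative):

```sh
docker run -d -p 8080:80 nginx

# The published port shows up on the Mac side directly:
curl http://localhost:8080    # or http://docker.local:8080, per configuration
```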
A lot of this only became possible in recent versions of OS X thanks to the bundled Hypervisor.framework, and the hard work of mist64, who released xhyve (in turn based on bhyve from FreeBSD), which uses it. Most of the processes do not need root access and run as the user. We've also used some unikernel libraries from MirageOS to provide the filesystem and networking 'semantic translation' layers between OS X and Linux.
Inside the application is also the latest, greatest Docker engine, plus auto-updates to make it easy to stay up to date. Although the app only runs Linux containers at present, the Docker engine is gaining support for non-Linux containers, so expect to see updates in this space. This first beta release aims to make the use of Linux containers as happy as possible on Windows and MacOS X, so please report any bugs or feedback to us so we can sort that out first :)

Yes, quite a few issues of that nature have been fixed (and we are planning to open-source the changes later in the year once we stabilise the overall application).
The bug above has been reported to Apple, and they've reportedly fixed it in the latest 10.11.4 seeds, but we've put in a workaround that detects ACPI sleep events and freezes vCPUs just before going into hibernate mode. None of the beta testers have reported any sleep crashes using Docker for Mac recently, so if you do see anything of this nature, please let us know.

Does anybody have any guides on setting up dev environments for code within Docker? I recall a DockerCon talk last year from Lyft about spinning up microservices locally using Docker. We're using Vagrant for development environments, and as the number of microservices grows, the feasibility of running the production stack locally decreases. I'd be interested in learning how to spin up five to ten Docker services locally on OS X for a service-oriented architecture.
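Not a full guide, but the usual starting point is a single Compose file declaring every service. A hedged sketch with made-up service names (compose v1 syntax, as was current at the time):

```sh
cat > docker-compose.yml <<'EOF'
web:
  build: ./web
  ports:
    - "8080:8080"
  links:
    - users
    - billing
users:
  build: ./services/users
billing:
  build: ./services/billing
EOF

docker-compose up -d
```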
This product from Docker has strong potential. I'm really excited to see this, because I've spent the last few months experimenting with Docker to see if it's a viable alternative to Vagrant. I work for a web agency, and currently our engineers use customized Vagrant boxes for each of the projects they work on. But that workflow doesn't scale, and it's difficult to maintain a base box and all of the per-project derivatives. This is why Docker seems like a no-brainer for us. However, it became very clear that we would have to implement our own tooling to get a similar environment: things like resolving friendly domain names (project-foo.local or project-bar.local) and adding in a reverse proxy so multiple projects can share port 80 (one common pattern is sketched below).
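A hedged sketch of that reverse-proxy pattern, assuming the third-party jwilder/nginx-proxy image (the domain and app image names are illustrative):

```sh
# The proxy watches the Docker socket and routes by the VIRTUAL_HOST env var.
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

docker run -d -e VIRTUAL_HOST=project-foo.local my-project-foo-image
```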
Docker for Mac looks like it will solve at least the DNS issue. Can't wait to try it out.

I am very excited about the new Mac app and I want to try it.
At the moment I use dlite. The thing I love about it is that it's transparent. I hope the new Mac app has an option or mode to be like that too (start on system boot, don't create a new desktop window/GUI; SSH from the terminal would be enough for me). Something analogous to MacVim's -v flag: by default 'mvim' opens a new app with its own window, but 'mvim -v' starts Vim inside the current terminal. Not a great analogy, sorry about that.

I judge Microsoft a bit for not just discontinuing Home. Some of the missing features in Home are what I'd describe as 'immoral',
in that they aren't just luxuries but important parts of the OS, or security features, e.g.:

- Group Policy Editor: the primary place to modify hundreds of local computer settings. They could have left out Domain Join and kept the local Group Policy Editor.
- Start Screen Control with Group Policy: adds more group policy options to modify the start screen/menu look and feel.
- Enterprise Mode Internet Explorer: name notwithstanding, this allows people to use legacy webapps with modern IE.
- AppLocker: a security feature (it isn't even in Pro, incidentally!). I'd turn it on.
- BitLocker: full-drive encryption (with different decryption options).
- Credential Guard: not used to protect non-domain credentials.
- Trusted Boot: because home users don't get rootkits?

Windows 10 Home is categorically less secure than Windows 10 Pro, which is in turn less secure than Windows 10 Enterprise.
Features like AppLocker, Credential Guard, and Trusted Boot are ones that all versions of Windows could benefit from, and BitLocker should be available and on by default. When you have a 'security' category in the feature list and are using it to differentiate versions of the OS, you really have to ask yourself how highly you prioritise security in general.
The whole Docker ecosystem exists today because of every single developer who found ways of using Docker to improve how they build software, whether streamlining production deployments, speeding up continuous integration systems, or standing up an application on your laptop to hack on. In this talk we want to take a step back and look at where Docker sits today from the software developer's point of view, and then jump ahead and talk about where it might go in the future. In this talk, we'll discuss:

- Making Docker an everyday part of developing software on the desktop, with Docker for Windows and Docker for Mac
- Docker Compose, and the future of describing applications as code
- How Docker provides the best tools for developing applications destined to run on any Kubernetes cluster

This session should be of interest to anyone who writes software: from people who want to hack on a few personal projects, to polyglot open-source programmers, to professional developers working in tightly controlled environments. Everyone deserves a better developer experience.