Wednesday, February 15, 2017
Blog moved to http://awilkins.id.au/
In an effort to keep things simpler for myself, and minimise the effort involved to create content, I have moved my blog over to using Hugo. My blog now lives at https://awilkins.id.au.
Tuesday, August 19, 2014
Availability Zones in Juju
You would be forgiven for thinking that I'd fallen off the face of the earth, considering how long it has been since I last wrote. I've been busy with my day job, moving into a new house, and life in general. Work on llgo has been progressing, mostly due to Peter Collingbourne. I'll have more to say about llgo's progress in future posts.
This post is about some of the work I've done on Juju recently. Well, semi-recently; this post has been sitting in my drafts for a little while, waiting for the new 1.20.5 release to be announced.
Availability Zones in Juju
One of the major focuses of the Juju 1.20 release has been high availability (HA). There are two sides to this: high availability of Juju itself, and high availability of your deployed services. We’ll leave the “Juju itself” side for another day, and talk about HA charms/services.
Until now, if you deployed a service via a charm with Juju, the cloud instance containing the service unit would be allocated wherever the cloud provider decided best. Most cloud providers split their compute services up into geographic regions (“us-east-1” in Amazon EC2, “US West” in Microsoft Azure, etc.). Some providers also break those regions down into “availability zones” (the actual term varies between providers, but we use “availability zone” to describe the concept). An availability zone is essentially an isolated subset of a region.
If you’re developing an application that demands high availability, then you probably want to make sure your application is spread across availability zones. Some providers offer a service level agreement (SLA) if you do this, such as Microsoft Azure: provided you allocate at least two VMs to a “Cloud Service” on Azure, you’re guaranteed 99.95% uptime under the SLA and you get reimbursed if the guarantee isn’t met.
In Juju 1.20, there are two options for distributing your service units across availability zones: explicit (akin to machine placement) and automatic. So far we have enabled explicit availability zone placement in the Amazon EC2 and OpenStack (Havana onwards) providers, with support for the MAAS provider on the horizon. To add a new machine to a specific availability zone, use the “zone=” placement directive as below:
juju add-machine zone=us-east-1b
As well as support for explicit zone placement, we’ve implemented automatic spreading of service units across availability zones for Amazon EC2, OpenStack and Microsoft Azure. When cloud instances are provisioned, each is allocated to an availability zone according to how densely populated the zones already are with related instances. Two cloud instances are considered related if they both contain units of a common service, or if they are both Juju state servers.
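To make the selection step concrete, here is a rough sketch in Go of the “least populated zone wins” idea described above. The function and variable names are hypothetical, for illustration only; this is not Juju’s actual provisioning code.

package main

import "fmt"

// pickZone returns a zone with the fewest related instances, so that
// new units spread as evenly as possible across availability zones.
// Illustrative sketch only; not Juju's real implementation.
func pickZone(relatedPerZone map[string]int) string {
    best := ""
    for zone, count := range relatedPerZone {
        if best == "" || count < relatedPerZone[best] {
            best = zone
        }
    }
    return best
}

func main() {
    counts := map[string]int{"us-east-1a": 1, "us-east-1b": 1, "us-east-1d": 0}
    fmt.Println(pickZone(counts)) // us-east-1d: the least populated zone
}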
To illustrate automatic spread, consider the mongodb charm. You’re going to use MongoDB as the datastore for your application, and you want to make sure the datastore is highly available; to do that, you’ll want to create a MongoDB replica set. It’s trivial to do this with the mongodb Juju charm:
juju deploy -n 3 mongodb
Wait a little while and you’ll have a 3-node MongoDB replica set. If a node happens to disappear, the replica set will rejig itself so that there is a master (if the master was in fact lost) and everything should continue to work. If all the nodes go away, then you’re in trouble. This is where you want to go a step further and ensure your nodes are distributed across availability zones for greater resilience to failure. As of Juju 1.20, the “juju deploy” you just did handles all of that for you: your 3 nodes will be spread uniformly across availability zones in the environment. If you add units to the service, they too will be spread across the zones according to how many other units of the service each zone already holds. Let’s see what Juju did…
$ juju status mongodb | grep instance-id
instance-id: i-7a6d2b50
instance-id: i-ff1562d4
instance-id: i-627f0a30
$ ec2-describe-availability-zones
AVAILABILITYZONE us-east-1a available us-east-1
AVAILABILITYZONE us-east-1b available us-east-1
AVAILABILITYZONE us-east-1d available us-east-1
$ ec2-describe-instance-status i-7a6d2b50 i-ff1562d4 i-627f0a30 | grep i-
INSTANCE i-627f0a30 us-east-1d running 16 ok ok active
INSTANCE i-ff1562d4 us-east-1a running 16 ok ok active
INSTANCE i-7a6d2b50 us-east-1b running 16 ok ok active
(Note: the ec2-* commands are available in the ec2-api-tools package.)
Juju has distributed the mongodb units so that there is one in each zone, so if one zone is impaired the others will be unaffected. If we add a unit, it will go into one of the zones with the fewest mongodb units.
Explicit placement is currently only supported by Juju’s Amazon EC2 and OpenStack providers, but automatic spread is also supported by the Microsoft Azure provider. Due to the way that Microsoft Azure ties together availability zones and load balancing, it is currently necessary to forgo explicit machine placement in order to support automatic spread. If you are upgrading an existing environment to 1.20, automatic spread will not be enabled. Newly created environments enable spread (and disable placement) by default, with an option to disable it (availability-sets-enabled=false in environments.yaml).
Enjoy.
Monday, January 6, 2014
llgo on ssa
Hello there!
I've been busy hacking on llgo again. In case you're new here: llgo is a Go frontend for LLVM that I've been working on for the past ~2 years on and off. It's been quite a while since I last wrote; there has been a bunch of new work since, so I have some things to talk about at last.
A few months ago, I started working on rewriting swathes of llgo's internals to base it on go.tools/ssa. LLVM uses an SSA representation, which made the process fairly straightforward. Basing llgo on go.tools/ssa gives me much higher confidence in the quality of the output; it also presented a good opportunity to clean up llgo's source itself, which I have begun, but certainly not finished. llgo is now able to compile all packages in the standard library, except those that require cgo (net, os/user, runtime/cgo).
llgo now works something like this:
- Go source is scanned and parsed by go/parser, producing a go/ast syntax tree;
- The AST is fed into go/types for type checking;
- The output of go/types is passed on to go.tools/ssa, which generates the SSA form;
- llgo translates the go.tools/ssa SSA form into an LLVM module;
- llgo-build links the LLVM modules for a program together and translates the result into an executable (a rough sketch of the first three steps appears below).
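To give a feel for the front half of that pipeline, here is a small sketch using today's golang.org/x/tools import paths (in 2014 the package lived at code.google.com/p/go.tools/ssa, and llgo's real driver is considerably more involved): it loads and type-checks a package, builds its SSA form, and lists the package-level members that a backend like llgo would translate to LLVM IR.

package main

import (
    "fmt"
    "log"

    "golang.org/x/tools/go/packages"
    "golang.org/x/tools/go/ssa"
    "golang.org/x/tools/go/ssa/ssautil"
)

func main() {
    // Load, parse and type-check the named package (steps 1 and 2 above).
    cfg := &packages.Config{Mode: packages.LoadAllSyntax}
    pkgs, err := packages.Load(cfg, "fmt")
    if err != nil {
        log.Fatal(err)
    }

    // Build SSA form for the loaded packages (step 3 above).
    prog, ssaPkgs := ssautil.AllPackages(pkgs, ssa.SanityCheckFunctions)
    prog.Build()

    // A backend would now walk each package's members and functions,
    // translating them to its own IR (LLVM, in llgo's case).
    for _, p := range ssaPkgs {
        if p == nil {
            continue
        }
        for name, member := range p.Members {
            fmt.Printf("%s: %T\n", name, member)
        }
    }
}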
go.tools/ssa supports translating a whole program to SSA form, but llgo works in the traditional way: packages are translated one at a time. Whole program optimisation is enabled by linking the LLVM modules together, prior to any translation to machine code.
There were a few bits that I stumbled on, when rewriting. Alan Donovan, the author of go.tools/ssa, was kind enough to give me some assistance along the way. Anyway, the main issues I had were:
- Translating Phi nodes requires a bit of finessing, to ensure processing of the Phi or the edges is not order-sensitive. This was dealt with by generating placeholder values for instructions that haven't yet been visited, and then replacing them later.
- ssa.Index is emitted for indexing into arrays. If an array is in a register, then indexing it means extracting a value; in LLVM, an array element extraction requires a constant index. This is currently kludged by storing to a temporary alloca, and using the getelementptr LLVM instruction. Hopefully I'm missing something and this is easily fixed.
- The Recover block is not dominated by the entry block, so it may not be valid for it to refer to the Alloc instructions for parameters and results. To deal with this, I generate a prologue block that contains the param/result Allocs; the prologue block conditionally jumps to either the recover or entry block, depending on panic/recover control flow. Alan has agreed to do something along these lines in go.tools/ssa.
- The ssa.Next instruction required making some assumptions about block ordering and instruction placement, in order to be able to translate string-range using Phi nodes. Recent changes to go.tools/ssa exposed the dominator tree, making it possible to do away with those assumptions.
Various significant changes have been made during the course of the migration to go.tools/ssa:
- Interfaces are now represented as in gc: empty interfaces with the runtime type & data, non-empty interfaces with an "itab" and data. Russ Cox wrote an article about the interface representation back in 2009. (A rough sketch of the two layouts appears after this list.)
- Panic/recover (and defer, by consequence) are now using setjmp/longjmp. I had been using exceptions, but it was rightly pointed out to me that this wouldn't work unless there were a way of doing non-call exceptions in LLVM (which has not been implemented). The setjmp/longjmp approach incurs a cost for every function that may defer or recover, but it works without modifications to LLVM. Perhaps this will be revisited in the future.
- go/types/typemap is now used for mapping types.Types to runtime type descriptors and LLVM types. Runtime type descriptors are now generated more completely, and more correctly. Identical type descriptors will now be merged at link-time.
- llgo no longer generates conditional branching for calls to non-global functions, when comparing structs, or in map iteration. Apart from producing better code, this makes it much simpler to work with go.tools/ssa, which has its own idea about how the SSA basic blocks relate to one another.
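As promised above, here is a rough sketch of the two interface layouts. The struct and field names are hypothetical, chosen for illustration; they are not llgo's (or gc's) actual definitions.

// Package ifacelayout sketches the two interface layouts described above.
package ifacelayout

import "unsafe"

// eface is the layout of an empty interface (interface{}):
// a runtime type descriptor plus a pointer to the data.
type eface struct {
    typ  unsafe.Pointer // runtime type descriptor
    data unsafe.Pointer // pointer to the concrete value
}

// iface is the layout of a non-empty interface: an itab plus data.
type iface struct {
    tab  *itab
    data unsafe.Pointer
}

// itab ties an interface type to a concrete type, and carries the
// method pointers used to dispatch the interface's methods.
type itab struct {
    inter unsafe.Pointer // interface type descriptor
    typ   unsafe.Pointer // concrete type descriptor
    fun   [1]uintptr     // method pointers (variable-sized in practice)
}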
There have also been miscellaneous bug fixes, and improvements, not directly related to the move to go.tools/ssa. Some highlights:
- A custom importer/exporter, thanks to Fredrik Ehnbom. The importer side is disabled at the moment, due to an apparent bug in go.tools/ssa.
- Debugger support, thanks again to Fredrik Ehnbom. I haven't reenabled it since the move to go.tools/ssa. I'll get onto that real soon now, because debugging without it can be tiresome.
- llgo-build can now take a "-test" flag that causes llgo-build to compile the test Go files, yet again thanks to Fredrik Ehnbom. This is currently reliant on the binary importer being enabled, so it won't work out of the box until that bug is fixed.
- Shifts now generate correct values for shifts greater than the width of the lhs operand (see the small example after this list).
- Signed integer conversions now sign-extend correctly.
- bytes.Compare now works as it should (-1, 0, 1, not <0, 0, >0). "llgo-build -test bytes": PASS
- llgo-build can now take a "-run" flag that causes llgo-build to execute and then dispose of the resulting binary.
- Type strings are propagated to LLVM types, making the IR more legible, thanks to Travis Cline.
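For the shift item above: Go defines the result when the shift count is at least as wide as the operand (a left shift drops every bit, and a signed right shift keeps filling with the sign bit), and that is what llgo now produces. A tiny example of my own, not from llgo's test suite:

package main

import "fmt"

func main() {
    var x uint32 = 1
    var y int32 = -8
    s := uint(40) // shift count wider than the 32-bit operands

    fmt.Println(x << s) // 0: every bit has been shifted out
    fmt.Println(y >> s) // -1: arithmetic shift keeps the sign bit
}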
I think that's everything. I have various things I'd like to tackle now, but not enough time to do it all at once. If you're interested in helping out then there's plenty to do, including:
- Move to using libgo. Ideally the gc runtime would be rewritten in Go already, but that's not going to happen just yet. The compiler and linker are due to be rewritten in Go soon, which is a lot of work as it is.
- Finish off runtime type descriptor generation (notably, type algorithms).
- Get PNaCl support working again. This should be pretty close, but requires the binary importer to be enabled.
- Implement cgo support.
- Implement bounds checking, nil pointer checks, etc.
- Get garbage collection working. There's Pull Request #108, but this is perpetuating the problem that is llgo's custom runtime. Since GC is fairly invasive, I don't want to go tying llgo to that runtime any more than it is currently. I expect this will have to wait until libgo is integrated.
- Escape analysis. This is a must-have, but not immediately necessary. The implementation should be based on go.tools/ssa, interfacing with the exporter/importer to record/consume information about external functions.
- Make use of go.tools/ssa/ssautil/Switches. This is an optimisation, so again, not immediately necessary.
If you want to have a play around, then grab LLVM and Go, and then:
- go get github.com/axw/llgo/cmd/llgo-dist && llgo-dist
- llgo-build <some/package> or llgo-build file1.go file2.go ...
Let me know how you get on with that.
Here's hoping 2014 can be a productive year for llgo. Happy new year.
Cheers,
Andrew
Friday, August 16, 2013
llgo update #14
Ahoy there, mateys!
It's been three months since our last correspondence. Apologies for the negligence. I've been busy, as usual, but it's more self-inflicted than usual. I've taken up a new role at Canonical, working on Juju. I'm really excited about Juju (both the concept and realisation), and the fact that it's written in Go is icing on the cake. Working remotely is taking some getting used to, but so far it's been pretty swell. Anyway, you didn't come here to read about that, did you?
I'm still working on llgo in the background, quietly prodding it along towards the 0.1 milestone. There's just one big ticket item left, and that's partially done now: channels. I've just finished porting the basics of channels from gc's standard library to llgo's runtime. That doesn't include select, which is entirely missing. When that's done, I'll be content to release 0.1.
So what's new since last time?
- There's a new llgo-build tool, which takes the pain out of building packages and programs with llgo and the LLVM toolchain. Just run "llgo-build <package>", and you'll either build and install a package, or build a program in the working directory. There's no freshness checking, so you're currently required to manually build all dependencies before building a program.
- Simplified building against PNaCl: llgo-dist now accepts a "-pepper" option, which points to a NaCl SDK.
- Implemented support for map literals.
- Implemented complex number arithmetic.
- Implemented channels (apart from anything select-related). A toy program exercising some of these new features appears after this list.
- Numerous bug fixes.
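Here is a small program of my own devising (not from llgo's test suite) that exercises a map literal, complex arithmetic and an unbuffered channel, three of the features just listed:

package main

import "fmt"

func main() {
    // Map literal.
    squares := map[int]int{1: 1, 2: 4, 3: 9}

    // Complex arithmetic.
    z := (1 + 2i) * (3 - 1i)

    // An unbuffered channel; no select involved.
    ch := make(chan int)
    go func() { ch <- squares[3] }()

    fmt.Println(squares[2], z, <-ch) // 4 (5+5i) 9
}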
In my previous post I talked about having implemented panic/recover, and having implemented them in terms of DWARF exception handling. Well, it looks like PNaCl isn't going to support that, at least initially, so a setjmp/longjmp version is likely inevitable now.
I also said I would be working on a temporary fork of cmd/go. I gave up on that, after hitting a few stumbling blocks. I figured it was more important to actually get the compiler and runtime working than to get bogged down in the tooling, hence the simpler llgo-build tool.
That's about it! "Feature complete" is getting closer, though lots of things still don't work very nicely. Still no garbage collection, no proper escape analysis, etc. Those will come in time.
For now, though... I think I might go catch up on some sleep.
Saturday, May 18, 2013
llgo on Go 1.1
Hi folks,
(For those of you coming from HN/Twitter/elsewhere, this is a post about llgo. llgo is an LLVM frontend for the Go programming language).
In my last post I mentioned that work had begun on moving to Go 1.1 compatibility; this has been my primary focus since then. Since Go 1.1 is now released (woohoo!), I've gone ahead and pulled all the changes back into the master branch on GitHub. If you want to play around, you can do the following:
- Get Go 1.1.
- Get Clang and LLVM (I've tested with 3.2, Ubuntu x86-64). Make sure llvm-config is in your $PATH.
- Run "go get github.com/axw/llgo/cmd/llgo-dist"
- Run "llgo-dist". This will install llgo into $GOBIN, and build the runtime.
The biggest new feature would have to be: defer, panic and recover (I'm lumping them together as they're closely related). I've implemented them on top of LLVM's exception handling support. The panic and recover functions are currently tied to DWARF exception handling, though it's simple enough that it should be feasible to use setjmp/longjmp on platforms where DWARF exception handling isn't viable.
Aside from that, there's some new bits and bobs:
- Method sets are handled properly now (or at least not completely wrong like before). This means you can use an embedded type's methods to satisfy an interface (see the small example after this list).
- "return" requirements are now checked by go/types.
- cap() is now implemented for slices.
- llgo-dist now builds against the LLVM static libraries (if available) by default, with an option to build against the shared libraries.
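A small example, of my own rather than from llgo's tests, of the embedding case mentioned in the first item: the embedded type's method is promoted into the outer type's method set, so the outer type satisfies the interface.

package main

import "fmt"

type Greeter interface {
    Greet() string
}

type Base struct{}

func (Base) Greet() string { return "hello" }

// Wrapper embeds Base, so Base's Greet is promoted into Wrapper's
// method set and Wrapper satisfies Greeter without redefining Greet.
type Wrapper struct {
    Base
}

func main() {
    var g Greeter = Wrapper{}
    fmt.Println(g.Greet()) // hello
}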
I'll be working on a temporary fork of cmd/go to build programs with llgo, while a long-term solution is figured out. I'd also like to get PNaCl integration working again, given that its release is nigh.
That's all for now.
Friday, March 1, 2013
llgo update #12
Oh my, it's been a while.
In my previous post I wrote about llgo and PNaCl. I haven't had much time to play with PNaCl recently, but I have been prodding llgo along. In February, my wife gave birth to our son, Jeremy, so naturally I've been busy. But anyway, let's talk about what has been happening in llgo. Quick, while he's sleeping!
Feature-wise, there's nothing terribly exciting going on. Without getting too boring, what's new is:
- A new "go1.1" branch in the Git repository. The go1.1 branch aims to make llgo compatible with the Go tip, and will replace the master branch when Go 1.1 is released.
- Removed llgo/types (a fork of the old exp/types package), and moved to go/types.
- Updated runtime type representations to match those from gc's tip (thanks to minux for initiating this effort).
- Updated to use architecture-specific size for "int" (same as uintptr).
- Changed function representation to be a pair of pointers, to avoid trampolines/runtime code generation for closures. The rationale is the same as for rsc's proposal for Go 1.1; using runtime code generation limits the environments that Go can run in (e.g. PNaCl). A rough sketch of the layout appears after this list.
- A slew of bug fixes and minor enhancements.
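As a rough sketch of the "pair of pointers" representation just described (the names here are hypothetical, not llgo's actual definitions):

// Package funcrepr sketches the "pair of pointers" function value
// described above.
package funcrepr

import "unsafe"

// funcval is a function value: a pointer to the code plus a pointer to
// the captured environment. Top-level functions carry a nil context,
// while closures point at their captured variables, so neither kind
// needs a trampoline (runtime-generated code) to be called.
type funcval struct {
    code unsafe.Pointer // pointer to the function's machine code
    ctx  unsafe.Pointer // pointer to the closure's captured environment
}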
The go/types change in particular was not a small one, but llgo came out much better at the end. As of the most recent go/types commits, llgo now passes all of its tests in the go1.1 branch. Now I can get back to implementing features again.
That's about all there is to report. It has been suggested that I set up some milestones in the GitHub project; I will spend a bit of time coming up with what I think are the bare essentials for a 0.1 release, and what would constitute future releases and so on.
One last thing: there's a new(ish) llgo-dev mailing list. If you want to get involved, or just lurk, come and join the party.
Until next time.
Sunday, December 9, 2012
Go in the Browser: llgo does PNaCl
Last week I briefly reported on Google+ that I had written a Go-based Native Client module, built it with llgo, and successfully loaded it into Google Chrome. I'd like to expand on this a little now, and describe how to build and run it.
Before you start...
If you want to try this out yourself, then you'll need to grab yourself a copy of the Native Client SDK. I've only tested this on Ubuntu Linux 12.10 (x86-64), so if you're trying this out on a different OS/arch you may need to alter the instructions.
Anyway, grab the SDK according to the instructions on the page I linked to above. Be sure to get the development/unstable branch, by updating with the "pepper_canary" target:
$ cd nacl_sdk; ./naclsdk update pepper_canary
This is not a small download, so go and brew some tea, or just read on to see where we're going with this.
The anatomy of a PNaCl module
By now I guess you probably know what Native Client is, but if you don't, I suggest you take a moment to read about it on the Google Developers (https://developers.google.com/native-client/) site.
What may not be so well known is PNaCl, the next evolution of Native Client. PNaCl (pronounced "pinnacle") is short for Portable Native Client, and is based on LLVM. Developers continue to write their code the same as in traditional NaCl, but now it is compiled to LLVM bitcode; PNaCl restricts usage to a portable subset of bitcode so that it can then be translated to native x86, x86-64, or ARM machine code. To compile C or C++ modules to PNaCl/LLVM bitcode, one uses the pnacl-clang compiler provided with the Native Client SDK.
To make use of Native Client, one develops a module, which is an executable, that can be loaded into Google Chrome (or Chromium). A module implements certain functions specified in the Pepper API (PPAPI), which is the API that interfaces your module with the browser. One of the functions is PPP_InitializeModule, and another is PPP_GetInterface. The former provides a function pointer to the module for calling back into the browser; the latter is invoked to interrogate the module for interfaces that it implements.
A nacl/ppapi package for Go
Since llgo speaks LLVM, it should be feasible to write PNaCl modules in Go, right? Right! So I set about doing this last week, and found that it was fairly easy to do. I have written a demo module which you can find here: https://github.com/axw/llgo/tree/master/pkg/nacl/ppapi, which I later intend to morph into a reusable Go package with a proper API. I have taken a lot of shortcuts, and the code is not particularly idiomatic Go; bear in mind that llgo is still quite immature, and that this is mostly a proof of concept.
Most of the code in the package is scaffolding; the example module is mostly defined in example.go, some also in ppapi.go. At the top of example.go, we instantiate a pppInstance1_1, which is a structure which defines the "Instance" interface. This interface is used to communicate the lifecycle of an instance of the module; when a module is loaded in a web page, this interface is invoked. We care about when a module instance is created, and when it is attached to a view (i.e. the area of the page which contains the module). Note that when I say interface, I mean a PPAPI interface, not a Go interface. Later, I hope to have modules implement Go interfaces, and hide the translation to PPAPI interfaces.
The example is contrived, and quite simple; it demonstrates the use of the Graphics2D interface, which, as the name suggests, enables a module to perform 2D graphics operations. The demo simply draws repeating rectangles of different colours, animated by regularly updating the graphics context and shifting the pixels on each iteration. I would have used the standard "image" Go package, but unfortunately llgo is currently having trouble compiling it. I'll look into that soon.
Building llgo
Alright, how do we build this thing? We're going to do the following things:
- Build llgo, and related tools.
- Compile the PNaCl-module Go code into an LLVM module.
- Link the llgo runtime into the module.
- Link the ppapi library from the Native Client SDK into the module.
- Translate the module into a native executable.*
*The final step is currently necessary, but eventually Chrome/Chromium will perform the translation in the browser.
Let's begin by building the llgo-dist tool. This will be used to build the llgo compiler, runtime, and linker. More on each of those in a moment. Go ahead and build llgo-dist:
$ go get github.com/axw/llgo/cmd/llgo-dist
The llgo-dist tool takes two options: -llvm-config and -triple. The former is the path to the llvm-config tool, and defaults to simply "llvm-config" (i.e. find it using PATH). The latter is the LLVM target triple used for compiling the runtime package (and other core packages, like syscall). The Native Client SDK contains an llvm-config and the shared library that we need to link with to use LLVM's C API.
As I said above, I'm running on Linux x86-64, so for my case the llvm-config tool can be found in:
nacl_sdk/pepper_canary/toolchain/linux_x86_pnacl/host_x86_64/bin/llvm-config
At this point, you should put the "host_<arch>/bin" directory in your PATH, and the "host_<arch>/lib" directory in your LD_LIBRARY_PATH, as llgo currently requires it, and I refer to executables without their full paths in some cases.
The Native Client SDK creates shared libraries with the target armv7-none-linux-gnueabi, so we'll do the same. Let's go ahead and build llgo now.
$ llgo-dist -triple=armv7-none-linux-gnueabi -llvm-config=nacl_sdk/pepper_canary/toolchain/linux_x86_pnacl/host_x86_64/bin/llvm-config
We now have a compiler, linker, and runtime. As an aside, on my laptop it took about 2.5s to build, which is great! The gc toolchain is a wonderful thing.
You can safely ignore the warning about "different data layouts" when llgo-dist compiles the syscall package, as we will not be using the syscall package in our example.
Building the example
Now, let's compile the PNaCl module:
$ llgo -c -o main.o -triple=armv7-none-linux-gnueabi llgo/pkg/nacl/ppapi/*.go llgo/testdata/programs/nacl/example.go
This creates a file called "main.o", which contains the LLVM bitcode for the module. Next, we'll link in the runtime. Eventually, I hope that the "go" tool will be able to support llgo (I have hacked mine up to do this), but for now you're going to have to do this manually.
$ llgo-link -o main.o main.o $GOPATH/pkg/llgo/armv7-none-linux-gnueabi/runtime.a
Now we have a module with the runtime linked in. The llgo runtime defines things like functions for appending to slices, manipulating maps, etc. Later, it will contain a more sophisticated memory allocator, a garbage collector runtime, and a goroutine scheduler.
We can't translate this to a native executable yet, because it lacks an entry point. In a PNaCl module, the entry point is defined in a library called libppapi_stub.a, which is included by the libppapi.a linker script. We can link this in using pnacl-clang, like so:
$ pnacl-clang -o main.pexe main.o -lppapi
This creates a portable executable (.pexe), an executable still in LLVM bitcode form. As I mentioned earlier, this will eventually be the finished product, ready to load into Chrome/Chromium. For now, we need to run a final step to create the native machine code executable:
$ pnacl-translate -arch x86-64 -o main_x86_64.nexe main.pexe
That's it. If you want to load this on an x86 or ARM system, you'll also need to translate the pexe to an x86 and/or ARM nexe. Now we can run it.
Loading the PNaCl module into Chrome
I'm not sure at what point all the necessary parts became available in Chrome/Chromium, so I'll just say what I'm running: I have added the Google Chrome PPA, and installed google-chrome-beta. This is currently at version 24.0.1312.35 beta.
By default, Chrome only allows Native Client modules to load from the Chrome Web Store, but you can override this by mucking about in about:flags. Load up Chrome, go to about:flags, enable "Native Client", and restart Chrome so the change takes effect. Curiously, there's a "Portable Native Client" flag; it may be that the translator is already inside Chrome, but I'm not aware of how to use it.
To simplify matters, I'm going to hijack the hello_world example in the Native Client SDK. If you want to start from scratch, refer to the Native Client SDK documentation. So anyway, we'll build the hello_world example, then replace the executable with our own.
$ cd nacl_sdk/examples/hello_world
$ make pnacl/Release/hello_world.nmf
$ cp <path/to/main_x86_64.nexe> pnacl/Release/hello_world_x86_64.nexe
Now start an HTTP server to serve this application (inside the hello_world directory):
$ python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
Finally, navigate to the following location:
Behold, animated bars! Obviously the example is awfully simplistic, but I wanted to get this out so others can start playing with it. I'm not really in the business of fancy graphics, so I'll leave more impressive demos to others.
Next Steps
I'll keep dabbling with this, but my more immediate goals are to complete llgo's general functionality. As wonderful as all of this is, it's no good if the compiler doesn't work correctly. Anyway, once I do get some more time for this, I intend to:
- Clean up nacl/ppapi, providing an external API.
- Update llgo-link to transform a “main” function into a global constructor (i.e. an “init” function) when compiling for PNaCl.
- Update llgo-link to link in libppapi_stub.a when compiling for PNaCl, so we don't need to use pnacl-clang. Ideally we should be able to “go build”, and have that immediately ready to be loaded into Chrome.
- Get the image package to build, and update nacl/ppapi to use it.
- Implement syscall for PNaCl. This will probably involve calling standard POSIX C functions, like read, write, mmap, etc. Native Client code is heavily sandboxed, but provides familiar POSIX APIs to do things like file I/O.
If you play around with this and produce something interesting, please let me know.
That's all for now – have fun!