Running gofmt in Browser with 10 Lines of Code (Using GopherJS)

I saw a message in Gophers Slack #general chat that stood out to me:

but running gofmt in the browser is, um, hard

The context was that they had a 100%-client-side JavaScript application that produced some Go code, but the Go code wasn't gofmted.

"Wait a minute, that's not hard," I thought. "It's trivial! Getting gofmt to run in the browser? I bet I could do it in 10 lines of code!"

Right?

Then I realized. It's not trivial. It's not obvious. It just seems that way to me because I've done a lot of things like this. But many people probably haven't.

So that inspired me to write this blog post showing how you, too, can have gofmt functionality running in the browser alongside your JavaScript code very easily. The only thing I'll assume is you're a Go user and have a working Go installation (otherwise, you're unlikely to be interested in gofmt).

gofmt has relatively complex behavior, so how can it all be implemented in just 10 lines? The trick is that we don't have to rewrite it all from scratch in JavaScript. Instead, we can write it in Go, as that'll be much easier.

To get Go to run in the browser, we'll use GopherJS. GopherJS is a compiler that compiles Go into JavaScript, which can then run in browsers. We'll use two Go packages:

  1. go/format, from the standard library, which implements gofmt's source formatting functionality
  2. github.com/gopherjs/gopherjs/js, which lets Go code interact with JavaScript

Building JavaScript that implements gofmt

So, let's get started. I am assuming you already have the current version of Go installed. First, you'll need to install the GopherJS compiler if you don't already have it. Its README has a detailed Installation and Usage section, but it boils down to running one command:

$ go get -u github.com/gopherjs/gopherjs

If you have your $GOPATH/bin added to your PATH, you'll be able to easily invoke the installed gopherjs binary.
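If it's not there yet, a line along these lines in your shell profile (e.g., ~/.bash_profile) takes care of it:

export PATH=$PATH:$(go env GOPATH)/bin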

Now, let's create a small Go package that'll implement our functionality. Make a new directory somewhere in your GOPATH, cd into it and write a main.go:

$ mkdir -p $(go env GOPATH)/src/github.com/you/gofmt
$ cd $(go env GOPATH)/src/github.com/you/gofmt
$ touch main.go

package main

import ("go/format"; "github.com/gopherjs/gopherjs/js")

func main() {
	js.Global.Set("gofmt", func(code string) string {
		gofmted, _ := format.Source([]byte(code))
		return string(gofmted)
	})
}

Then run gopherjs build in the same directory to build it:

$ gopherjs build --minify

The --minify flag causes it to write minified output. (You'd also do well to use HTTP compression, à la Content-Encoding: gzip, when serving it, because the generated JavaScript compresses very well.)

The output will be in the form of a gofmt.js file (and an optional source map in gofmt.js.map, useful during development). We can then include it in any HTML page:

<script src="gofmt.js" type="text/javascript"></script>

Once executed, a gofmt function will be available for the rest of your JavaScript code to use. The function will format any Go code you provide as a string, returning the output as a string:

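For example, calling it from the browser's JavaScript console with deliberately mis-formatted input might look like this (my own illustration; the output is shown approximately):

var formatted = gofmt("package main\nfunc main()   {println(1)}");
console.log(formatted);
// Prints (approximately):
// package main
//
// func main() { println(1) }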

That's 10 lines that implement gofmt—we're done! Okay, not quite. I'm ignoring errors for now, and the code itself is not gofmted. We'll come back and fix that next, but first, let's see what's happening in the code.

Code Explanation

First, we're creating a function literal that takes a string as input and returns a string. format.Source is used to format the incoming Go source code, and the result is returned. We're reusing a lot of existing Go code, the same code that the real gofmt command uses, which is well tested and maintained.

Then, we're using js.Global.Set with the parameters "gofmt" and our function literal.

js.Global is documented as:

Global gives JavaScript's global object ("window" for browsers and "GLOBAL" for Node.js).

And Set is:

Set assigns the value to the object's property with the given key.

What we're doing is assigning the func we've created as the value of the global JavaScript variable named "gofmt".

GopherJS performs the necessary conversions between Go types and JavaScript types, as described in the table at the top of https://godoc.org/github.com/gopherjs/gopherjs/js. Specifically:

Go type     JavaScript type   Conversions back to interface{}
string      String            string
functions   Function          func(...interface{}) *js.Object

So Go's string becomes a JavaScript String type, and a Go func becomes JavaScript Function.

By making that js.Global.Set("gofmt", func(code string) string { ... }) call, what we've done can be effectively expressed with the following JavaScript code:

window.gofmt = function(code) {
    // Implementation of gofmt, in JavaScript, done for you by the GopherJS compiler.
    // Lots and lots of generated JavaScript code here...
    return gofmted;
}

Just like that, you've got an implementation of gofmt that can run in the browser.

Finishing Touches

Let's come back to the code and make it nicer. We'll gofmt it, to avoid the irony, and add some error checking:

package main

import (
	"go/format"

	"github.com/gopherjs/gopherjs/js"
)

func main() {
	js.Global.Set("gofmt", func(code string) string {
		gofmted, err := format.Source([]byte(code))
		if err != nil {
			return err.Error()
		}
		return string(gofmted)
	})
}

Now we get nicer output when there's an error formatting the code:

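For example, feeding it syntactically invalid input now returns the parse error as a string (my own illustration; the exact error text is approximate):

var out = gofmt("package main\nfunc main( {}");
console.log(out);
// Prints something along the lines of:
// 2:12: expected ')', found '{'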

But maybe you'd rather just fall back to showing the original input if formatting fails:

gofmted, err := format.Source([]byte(code))
if err != nil {
	return code
}

Or maybe you want to include the error message on top:

gofmted, err := format.Source([]byte(code))
if err != nil {
	return err.Error() + "\n\n" + code
}

Or maybe return a boolean indicating success:

func(code string) (string, bool) {
	gofmted, err := format.Source([]byte(code))
	if err != nil {
		return code, false
	}
	return string(gofmted), true
}
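Note that GopherJS converts a Go function with multiple return values into a JavaScript function that returns an array, so (if I'm reading the conversion rules right) the JavaScript side of this variant would look roughly like:

var result = gofmt("package main");
var code = result[0], ok = result[1];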

This depends on the exact needs of your JavaScript application. I'll leave it to you.

Additional Notes

One potential concern might be that the generated JavaScript is relatively large, so it may negatively impact page loading time. There are tradeoffs in compiling Go to JavaScript, and large output size is one of them.

I have two thoughts on that:

  1. You can always do progressive enhancement. Load your page as usual, but use the async attribute on the script tag for the gofmt code (see the snippet after this list). The page loads as quickly as before, but during the first second the user can't gofmt (the app can show a nice placeholder message like "hang on, that functionality is still loading...", or maybe fall back to a remote server). Once the gofmt functionality loads, it starts working locally. Most people are unlikely to notice that first second when the app isn't completely functional yet. And on the second load, it'll be cached, etc.

  2. In the future, WebAssembly will let you ship things that beat pure JavaScript in terms of parse/load speed. The Go code you write today will work largely unmodified, but will become more optimized over time. The large generated JavaScript output is a short-term implementation detail.
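Enabling the async loading from point 1 is a one-attribute change to the script tag from earlier:

<script src="gofmt.js" type="text/javascript" async></script>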

In the end, there are tradeoffs between various approaches, so you should use the tools and approaches that make most sense for your project.

Conclusion

Hopefully, you saw just how easy it is to take some non-trivial functionality that happens to be already written in Go, and expose it to your JavaScript application.

Next week, I'll post another example of using the net/http and encoding/csv packages in the browser to stream CSV data from a server and parse it on the fly, so stay tuned.

Leave a reaction at the bottom of this post if you found it helpful, and comment if you have feedback.

3 comments

Using method values to get rid of global variables

Have you ever had a piece of Go code like this:

// TODO: Get rid of this global variable.
var foo service

func fooHandler(w http.ResponseWriter, req *http.Request) {
	// code that uses foo
}

func main() {
	foo = initFoo()

	http.HandleFunc("/foo", fooHandler)
}

One way to get rid of that global variable is to use method values:

type fooHandler struct {
	foo service
}

func (h fooHandler) Handle(w http.ResponseWriter, req *http.Request) {
	// code that uses h.foo
}

func main() {
	foo := initFoo()

	http.HandleFunc("/foo", fooHandler{foo: foo}.Handle)
}

Of course, net/http also has http.Handle, so in this case you can simply change fooHandler to implement the http.Handler interface:

type fooHandler struct {
	foo service
}

func (h fooHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
	// code that uses h.foo
}

func main() {
	foo := initFoo()

	http.Handle("/foo", fooHandler{foo: foo})
}

But method values are great when you need to provide a func, for example in https://godoc.org/path/filepath#Walk or https://godoc.org/net/http#Server.ConnState.
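For example, here's a minimal runnable sketch of using a method value with filepath.Walk; the walker type and its field are made up for illustration:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// walker carries state that would otherwise have to live in a global.
type walker struct {
	verbose bool
}

// visit has the same signature as filepath.WalkFunc.
func (w walker) visit(path string, fi os.FileInfo, err error) error {
	if err != nil {
		return err
	}
	if w.verbose {
		fmt.Println(path)
	}
	return nil
}

func main() {
	w := walker{verbose: true}

	// w.visit is a method value; it can be passed anywhere a func is expected.
	if err := filepath.Walk(".", w.visit); err != nil {
		fmt.Println(err)
	}
}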

2 comments

Trying out the Oculus Rift CV1 for the first time

Last night I got a chance to set up and spend a few hours with the long-awaited first consumer version of the Oculus Rift (aka Oculus Rift CV1). I'll describe my experience in some detail.

Background

This is not my first time trying on a VR headset. I tried the first developer kit (DK1) a long time ago, but only for a few minutes. It was compelling enough to make me understand that the final release, scheduled a few years off at the time, would be incredible, but of course there were a few "must fix this and must make that better first" items remaining. I had to use my imagination to predict what it'd feel like in the end. I tried two experiences: a pretty basic, unexciting demo of pong in 3D, and a virtual walkthrough of Japan in the form of a 360 video. The latter was great for making me feel the presence factor and the ability to experience an exotic real-world location.

Some time later, I tried the second dev kit (DK2) with a pretty sweet experience. There was a demo booth at a movie theater that was playing Interstellar, and they had a 5-8 minute Interstellar-themed experience. You sat in a nice, comfy chair, they put the DK2 on you along with a pair of high-quality headphones, and you were transported into a situation on board a futuristic spaceship. It was a high-budget demo, so the 3D environment was professionally made with lots of detail. You could look around, and primarily, you could feel the sense of presence... actually being in that spaceship, and exploring what it had to offer. My favorite part was the section where they made you feel weightlessness. With a countdown from 3, the experience creators used a combination of motion, a "whoosh" sound, and the chair you were sitting in slightly dropping down, to very convincingly make you feel that gravity had suddenly been turned off. The other part I loved was the end, where you got to a very detailed cockpit of the spaceship, complete with many controls, buttons, and joysticks. When you looked out the windows, you could see some planet in the near distance...

Initial setup

Ok, fast forward to now and the experience of unboxing and setting up the consumer version of the Rift.

From the moment you see the box itself, you can tell it's a very high-quality, well-made product. As you release the magnetically held side and open up the briefcase, your immediate reaction is confirmed. This is a really cool product. The unboxing and setup experience is quite close to what you'd get with Apple products. If you appreciate that, you'll like this.

I'm not going to talk much about the PC part of the equation (because I dislike it), so I'll say this to get it out of the way. We got an Oculus Ready PC from their recommended list. I think it's great they have that list so you can pick something you know will work, but in my opinion it's not a great experience to set up, own, and maintain a Windows PC. I wish it'd "just work" with something like OS X (which I use) or Linux (because it's open source and makes a better potential foundation for everything), or possibly the PS4 or the next generation of consoles. Until then, I had to set up Windows, uninstall the preinstalled McAfee anti-virus that was causing problems, and put up with Windows. If your standards are not as high as mine, you might be fine with it, but I didn't think it was great. If you want more reasons why Windows isn't great, just look at some tweets from Jonathan Blow; he points it out pretty often.

When you open up the box, you see a large note saying "To set up your Oculus Rift, go to oculus.com/setup", which is pretty much what I'd hope for in 2016. You run their installer, and it says it'll take 30-60 minutes to set everything up. I found that to be quite accurate. It walks you through everything step by step with very friendly directions, and in general worked well (aside from the McAfee thing causing it to not install at first). Having a computer that meets the system requirements is absolutely mandatory for having a good time.

The only minor issue I ran into was getting the plastic tab out of the remote so that its battery would make contact. I literally had to use pliers to pull the plastic tab out. After that, everything was fine (and everything was 100% smooth). The setup then proceeded to download and update the firmware on the tiny little remote, which was pretty telling of the age we live in, and then moved on to the next steps.

Putting it on

Halfway through the setup, once everything is connected, you've placed the sensor, made some room, and learned how to adjust the straps so the headset can sit comfortably on your head, the setup asks you to pick up your remote and put on the Rift, in order to proceed with the rest of the setup... in virtual reality.

The first thing you see is a futuristic, mostly all-white scene, with cool smoke effects and large text in front of you guiding you along. You can also see the tracking sensor, which I had placed on the table in front of me, in VR.

The coolest part is how it maps 1:1 to the sensor's actual location in the real world. The tracking and responsiveness felt absolutely perfect to me. The motion tracking covers all six degrees of freedom, so you could move your head anywhere, tilt it, rotate it, and it all just worked. I did not think or feel that there was a computer rendering frames at a certain frame rate (not until later, anyway, when I looked up some developer docs). Instead, it felt like magic. You felt as if you were inside a virtual world that you could comfortably look around.

The setup concluded with a brief sample demo VR experience, made up of a few short standing scenes. A cartoony field with a moose and a rabbit in front of a campfire. Then, another planet with an alien standing right in front of you, looking at you while mumbling and making faces. It felt like he was right there, and I just stood still in disbelief at how realistic it felt. Then, a scene with a huge T-rex walking up to you in a hallway. As I stood there, I couldn't help but grin in anticipation of what would happen next. "There's no way he'll try to eat me, this has got to be a friendly demo... right?" I was definitely a little freaked out by that possibility. The dinosaur gets pretty close up, but doesn't attack you, which was great.

Top three experiences

In the next few hours, I tried some of what the Oculus Home store had to offer and these are my top three experiences.

Oculus Dreamdeck

The Oculus Dreamdeck, which adds a few more scenes like the ones at the end of setup, is a great introduction and a sampler of the interesting locations and situations that VR can instantly teleport you into and make you feel a part of. For me, it simply shows that content creators now have a whole new paintbrush to use to create interesting and compelling experiences. Previously, they were limited to books, photos, and movies. But what can you achieve with VR? I think it's very interesting to find out.

Lucky's Tale

Lucky's Tale. A cute, family-friendly, traditional platformer game in 3D. But made for VR.

It was absolutely cool how the world felt pretty small, because you were so close up to it. And you could look around just by moving your head... and looking around. This was a great sitting experience, played with a controller. VR allowed you to be a part of the world, rather than simply looking at it through a window in the form of your display.

The most telling aspect here was the next day, when I remembered the experience of playing Lucky's Tale differently. I remembered being actually inside the 3D world and seeing all its details in front of me. I did not remember sitting in a chair, or looking at a TV, or what my environment was when I was playing the game. I remembered actually being inside the game's world.

Live for Speed

Live for Speed. It's a very realistic and well-made racing simulator. Its developers prioritized getting top-notch VR support into the latest patch, and it was incredible.

The Oculus Rift currently suffers from not having a dedicated made-for-VR controller, so we have to use traditional input devices like the keyboard, mouse, or gamepad. There is no tracking of your hands, so when in VR, you typically feel as if you have no body. However, in LFS, the driver has a body, with his hands on the steering wheel and legs near the pedals. It's pretty much where your body would be. When you put your headset on, you instantly end up inside a car. It's probably the most realistic and believable experience. I could look around, look in the mirrors, look at the seats in the back, out the windows, etc., just by moving my head.

Social

One of the defining aspects of VR is its ability to instantly teleport you into a typically unattainable situation or environment. Want an entire stadium all to yourself? Or have an entire theater for your personal enjoyment? Or a sweet home theater setup? It's easy in VR.

If you want to get away from people temporarily, VR can probably simulate that really well. But it can also make you feel very lonely quickly, if the environment you're in is really cool but you have no one to share it with. That's why social aspects are going to be big and very important. It's no surprise Facebook bought Oculus.

After trying it for the first few minutes, the first thing I wanted to do was to share this really cool experience with other people and tell them about it.

Content and possibilities of the medium

With the Oculus Rift, you get Oculus Home. It's impressive what they've been able to put together in such a short time, but it clearly lags behind the likes of Steam by a large margin.

Over time, there will be more content and more experiences, games, and tools created for VR. But it's going to take hard work and money (to pay for the time).

Still, it's very interesting to see what'll happen. I feel this is a whole new medium, as exciting as the year TV became a reality. Imagine how you'd feel after seeing the very first movie in your life, knowing that directors can now share stories and come up with cool new experiences for that medium. Well, I can't wait to see what talented directors can do with VR, and what they'll be able to come up with when they control not just the vertical and horizontal pixels on a flat screen, but so many more of your senses.

Like any medium, it can be used in good ways but also abused. It's no different from books in this regard.

It's also possible to ruin VR for yourself if you approach it adversarially. It's still quite basic, and it's easy to push it to its limits and find ways in which it breaks, so if you want to prove you're not afraid of falling off a virtual ledge, it's not hard. The point is that if you can suspend your disbelief just a little, the feeling of presence on offer can be very compelling, and that lets you enjoy amazing experiences.

Closing thoughts

After trying the final consumer version, it's very clear to me how big this is and is going to become. It's not a gimmick like the 3D vision goggles of 5-10 years ago. This is the real thing, finally happening now. It's a whole new medium that will unlock all kinds of possibilities that weren't possible before. It's a great time to be creative and artistic, because you can express yourself in whole new ways.

It will take some time, after all, the preorders are still only beginning to ship, but I expect that over the next few years, more and more people will have tried and understood VR for themselves, resulting in it being adopted by consumers en masse, not too dissimilar from how most people have mobile devices now.

Eventually (some years later), as the technology improves and becomes more accessible and commonplace, and VR headset resolutions increase, the need for monitors will go away, and our work desks will start to look very different from now. They had paper on them first, then large CRT displays, replaced by LCD panels of today, and VR headsets in the future.

Special thanks to Sourcegraph for getting a Rift for our office, so that we can start thinking about and looking for more ways to bring the future sooner.

1 comment

Fireside chat with Dmitri Shuralyov, dev tools hacker

Dmitri Shuralyov (shurcooL) was one of the first users of Sourcegraph whom we didn't personally know. We've enjoyed meeting up with him over meals and seeing him at all of the Go meetups in San Francisco. He is the quintessential early adopter—and creator—of new kinds of development tools, such as Conception-go (a platform for dev tools experimentation) and gostatus (which lets you track changes to a whole GOPATH as if it's a single repository). He's bringing the Go philosophy to other languages, starting with markdownfmt (gofmt for Markdown).

We talked with Dmitri to learn about his sources of programming inspiration, and to hear about his current Go projects.

How and why did you start getting involved in open source programming?

I'm no stranger to using closed source software over the years. Some of it is really nice, and I enjoy using it. But as someone who writes code, I inevitably want to make it even better, fix bugs or contribute in other ways. But I can't do that unless the project is open source.

For all my personal projects, I always default to making them open source. Just because if someone out there likes it and wants to help out and make it better, I want to welcome that instead of denying them that opportunity.

In addition to that, I strongly believe in the benefits of code reuse, and open source libraries allow for that. When you have a specific task and there's one great library for it, you can use it. If you or someone else makes an improvement, everyone using that library benefits, and the improvement only needs to be done once.

What are the libraries and authors you have learned the most from (or who have most influenced your work)?

Without a doubt, the biggest influence and inspiration for my personal work has been Bret Victor. His talks and articles, especially Inventing on Principle and Learnable Programming, have helped me understand and convey what really drives me. I have a set of guiding principles that I believe would help make software development better, and my personal work is about finding out if that's the case.

On the programming side, one of the first non-trivial Go projects I wanted to make was to extend the Go-syntax printing of vars, à la fmt.Printf("%#v", foo), but have it work recursively on nested structs, producing gofmted output. Dave Collins (@davecgh) had created go-spew, which did the deep printing part, but it wasn't quite gofmted Go syntax. With his permission, I forked go-spew to create go-goon, and proceeded to make the necessary output formatting changes. At the time, I was just learning about Go's reflection, so having a high-quality working solution was a tremendous help.

I've made a few PRs to gddo, the source of godoc.org. Every time I thought I had a decent solution, Gary Burd (@garyburd) would provide some really helpful feedback and suggestions. The end result would be much better code that seemed obvious in retrospect. I've definitely learned a lot from that experience.

Richard Musiol (@neelance) is incredibly interesting to follow on GitHub. I've "watched" the GopherJS project since its early days, and it's incredible the rate at which he's able to write high quality code that implements features, fixes bugs, and solves challenging tasks that are a part of compiling Go into JavaScript.

Gustavo Niemeyer (@gniemeyer) comes to mind as having some of the highest quality, nicely documented Go packages I've seen. He really sets a high bar in that regard. If you want to see how nice a Go package can be, definitely take a look at some of his work.

And of course, there are the core members of the Go team, who are largely responsible for most of the standard library. Very often, I end up peeking into the source of common funcs from the standard library to confirm some detail or learn more about how they work. The Go standard library is an excellent source of learning material in every sense.

What are you currently working on?

I've recently come to realize that the project I'm currently working on is essentially my GOPATH workspace. It includes my own open source repos, but also various open source libraries that I rely on.


Conception-go

My main personal project continues to be Conception, now written completely in Go. Being a very ambitious undertaking, it still requires a lot of work both on the frontend and backend. Luckily, the backend code is shared between many of my other repos. For example, I use my github_flavored_markdown package to render GFM inside Conception. My gostatus tool, which recently gained support for displaying git stash info, uses the same internal code to figure out the status of Go package repos, and Go-Package-Store reuses that yet again to present a list of Go packages in your GOPATH that have updates, complete with the commit messages of what has changed.


markdownfmt: gofmt for Markdown

Lately, I've been working on improving the diff highlighting within github_flavored_markdown as well as inside Conception. Ideally, I want the same code to be doing the highlighting in both places, but the highlighting interfaces inside Conception need to be rewritten before that's possible. I've also just recently improved my GoPackageWorkingDiff() func to include new/untracked files.

I'm also working on adding support for displaying notifications to trayhost, which is needed for the InstantShare client. InstantShare sits in your menu bar and allows you to instantly upload images and get a URL. The reason there's no delay, even for large images, is that both the upload and download are streamed. The backend server part was written by my friend Pavel Bennett (@pavelbennett/pavben) in Go.

Now that we're out of the go1.3 feature freeze, I really want to fix issue 5551 in go/format in the Go standard library. It has existed since go1.1, and it has caused me a lot of grief and blocked many things I wanted to build. For now, I have to continue to shell out to the gofmt binary, which performs poorly for large inputs and isn't available in all contexts.

What do you use Sourcegraph for?

I use Sourcegraph on various occasions to get answers to ad-hoc programming questions. There was a time when I was looking for Go code that implements a certain public interface. Another time, I wanted to find example usages of the (url.Values).Encode() method.

Just recently, I was looking at Close bool in net/http.Request. I really wanted to find examples of how it was being used in the real world, and I was easily able to do that with the help of Sourcegraph. That's just not something I would've been able to do in any other way.

But I feel that what Sourcegraph offers now is just the tip of the iceberg, a preview of things to come. If you look at what it's doing, it's essentially going through all the open source code out there and performing code analysis on it. Then it's making a connected graph of all the identifiers and types and so on, currently presented as a nice web interface with links that let you navigate code easily. As it's doing that, it's gaining the ability to answer more and more of the questions about code that you may run into during your day. The kind of questions and answers that may help you learn how to do something new, or figure out what code will be affected if you change some public API, and help you get your task done just a little faster. That's pretty exciting if you ask me!

What do you hope will improve about the field of programming in the next 5 years?

I like that you've grounded the question by specifying 5 years. But that also makes it harder to answer, because it has to be more realistic, and I can't just say "flying cars and code that writes itself" yet. 😃

But you've also mentioned "hope", so I will be a little over-optimistic.

I hope new formats that are released follow Go's example and come with standard formatter tools (like gofmt) from day one. Having a tool that, on save, formats your code and everyone else's to look the same and pretty is incredibly helpful, and retrofitting such tools later on is much harder (if not impossible) than doing it right away.

I hope we will have more widespread code refactoring and high-level manipulation tools at our disposal. I do love my text editor for writing code, and it's great, but sometimes I want to be able to easily rename identifiers, not perform find-and-replace with a manually crafted query. Or move packages seamlessly without having to manually move folders and update import paths. Tools like @bradfitz's goimports, which automatically adds and removes my Go imports, are so transformative that it's hard to imagine going back to doing that manually. I'd love to see more innovation and improvement of this kind in the field.

But most importantly, I really hope we have some kind of breakthrough that would allow public APIs to change more often and more easily, without breaking packages that depend on the old version of the API. It's really a shame to see API improvements being purposefully held back due to concerns about backwards compatibility. There are some attempts to help with this, for example applying semver to APIs. But that still requires people to manually update to new library APIs. gofix is an inspiration here, and perhaps it can be used as a starting point in turning this into reality. Can we do better than the status quo? I'm looking forward to finding out!


Thanks to Dmitri for his contributions to the dev tools and Go communities and for sharing his inspirations and projects with us!

This interview was conducted by Quinn Slack (@sqs).

0 comments

How I use GOPATH with multiple workspaces

First off, I want to make it clear that I have a fixed GOPATH that I do not change per project. The GOPATH env var is set inside my ~/.bash_profile file and doesn't change. Every Go package I have exists in no more than one place. I tend to keep all my personal dependencies at the latest version, simply because it's easier to have everything up to date than any alternative.

I do make use of the fact that the GOPATH environment variable is defined as a list of places rather than a single folder.

From http://golang.org/cmd/go/#hdr-GOPATH_environment_variable,

The GOPATH environment variable lists places to look for Go code. On Unix, the value is a colon-separated string. On Windows, the value is a semicolon-separated string. On Plan 9, the value is a list.

My GOPATH consists of 3 folders, or GOPATH workspaces.

The first one is my landing workspace. Since it's listed first, whenever I go get any new package, it always ends up in this workspace.

Go searches each directory listed in GOPATH to find source code, but new packages are always downloaded into the first directory in the list.

I make it a rule to never do any development in there, so it's always completely safe to clean this folder out whenever it gets too large or accumulates Go packages I don't use. After all, it only has Go packages that I can get again with go get.

My second workspace is for all my personal Go packages and any other packages I may want to "favorite" or do some development on. I move things I use regularly from the first workspace into the second.

My third workspace is dedicated to the private Go packages from my work, and their dependencies. It's convenient to have my work packages separate from all my personal stuff, so they don't get in each other's way.
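Concretely, a setup like this might look as follows in ~/.bash_profile (the folder names here are hypothetical):

# Landing, personal, and work workspaces, in that order.
export GOPATH=$HOME/go-landing:$HOME/go-personal:$HOME/go-work

# Make installed binaries from all three workspaces easy to invoke.
export PATH=$PATH:$HOME/go-landing/bin:$HOME/go-personal/bin:$HOME/go-work/bin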

With that setup, multiple GOPATH workspaces feel a lot like namespaces. The reason I have more than one is, to me, quite similar to why one might want to break a medium-sized Go package into several .go files. The result is effectively the same, since multiple .go files share the same scope, but it allows one to have "namespaces".

Similarly, multiple GOPATH workspaces share the same scope (i.e., it's your "effective GOPATH") but allow you to have namespaces for categories of packages. Having 3 GOPATH workspaces with 100 packages each is no different than 1 GOPATH workspace with 300 packages. They don't overlap, similarly to how code in multiple .go files of a Go package doesn't overlap.

Conclusion

As long as multiple GOPATH workspaces are a supported feature, I don't see the motivation to actively force yourself to use only one GOPATH workspace. You should do whatever is optimal for your particular use case.

That said, I feel that having 2 GOPATH workspaces is nice just because it lets you easily "undo" go getting a package you no longer want to keep by having the first GOPATH workspace act as a temporary place. If I had to have just one GOPATH, I would feel very hesitant before doing go get on any new, unfamiliar Go package. What if it brings in 100 dependencies and I decide I don't want to keep it anymore? Undoing a go get currently is not straightforward... Unless you use 2 GOPATH workspaces and don't mind simply blasting the first one away.

3 comments

Open Questions in Software Development

As a kid growing up, it seemed like math was a solved problem. It'd be very hard to invent something new, because so much had already been done by others. You'd have to get to a very high level before you could go somewhere no one has gone before.

In software development, it feels like the opposite situation. There are so many basic things that are currently unsolved and not possible. But they can be made possible with just some effort spent on it.

Here are some open questions, off the top of my head (using the Go language for context):

I'll add more as I run into them.

0 comments

On understanding code

When understanding code, we build gigantic mental dependency graphs relevant to the task at hand. This requires much focus, time, memory.

— Dmitri Shuralyov (@shurcooL) February 14, 2013

Building these dependency graphs in your head from code is very intensive mentally, that's why programmers cannot take interruptions.

— Dmitri Shuralyov (@shurcooL) February 15, 2013

So when you're "in the zone" being very productive, it's because this graph is built. You know what will be affected when you change stuff.

— Dmitri Shuralyov (@shurcooL) February 15, 2013

The problem with that approach is you're storing valuable, hard to build information in your short term memory. Next morning it'll be gone.

— Dmitri Shuralyov (@shurcooL) February 15, 2013

If another person reads your code, they will not benefit from anything you've come up with in your mind. They will have to redo this work.

— Dmitri Shuralyov (@shurcooL) February 15, 2013

Optimize for how humans think and change things as a visual graph, not how computers used to store bytes 40 years ago as plain text files.

— Dmitri Shuralyov (@shurcooL) February 15, 2013

Code/functionality should be stored in a way that embeds this dependency information. This is what I'm trying to create with Conception.

— Dmitri Shuralyov (@shurcooL) February 15, 2013

So when you look at a completely unfamiliar line of code, nearby will be an exhaustive list of _all_ that will change/break as you edit it.

— Dmitri Shuralyov (@shurcooL) February 15, 2013

Sprinkle some live programming magic, and you have a system where it's very easy to experiment, test, see all relevant changes and undo. :D

— Dmitri Shuralyov (@shurcooL) February 15, 2013
0 comments

Two opposite paths to making life better

A friend shared the following article on Lifehacker about Otixo, a "Convenient File Manager for Dropbox, Google Drive, SkyDrive, and All Your Other Cloud Services," and it made me realize something.

http://lifehacker.com/5905784/otixo-is-a-convenient-file-manager-for-dropbox-google-drive-skydrive-and-all-your-other-cloud-services

It seems most products/services these days, and the accompanying articles describing them, follow a similar model:

"Create something that doesn't exist to solve some problem, or make some task easier."

In this specific case, they figured that if a person has more than one cloud storage service, managing them separately is hard, so it'd make sense to create a unified interface that lets you do that. And that's great!

However, I've found that recently (only within the last year or two, I think some time after I got my iPhone) I've started to look for another way to solve problems: by simplifying. I figure out what's essential to me, and try to throw everything else out.

So if I need to have cloud storage, there are 2 paths:

  1. Sign up for multiple cloud storage services and manage them with something like Otixo,

or

  2. Pick a single service (say, Dropbox) and get by with just that.

The advantage of path 1 is that you get more space, but it involves more things. However, if you can get by with path 2, it involves fewer things, which I appreciate (more so than having that extra space, because right now 24 GB of free Dropbox is way beyond what I need).

My point is this: it seems my line of thinking (simplifying) is rather under-represented in terms of human effort. How many people are working on products/services or writing articles about how to simplify and throw things out? Seems like very few.

If you want to improve your life and solve problems by simplifying, you gotta do it on your own. If you wanna solve problems by adding things, just go to lifehacker.com and such.

2 comments

Pursuit of Perfection

Apple is known as a company that strives for perfection in everything they do. They place a lot of attention on getting the smallest of details right. The little things that you rarely notice, not until you see someone else doing it wrong.

This attention to detail shows up everywhere. Take the power indicator on a MacBook, for instance. It is meticulously programmed to light up in a solid white color only when the following two conditions are met:

  1. The MacBook is turned on
  2. Its LCD screen is not lit up

This ensures you can tell whether the computer is turned on at all times, yet the indicator light is never on when it would be redundant. After all, if the LCD is on, you will already know that the computer is on.

iOS is a fine example of high-quality Apple software. It's not always perfect, but the level of polish is above and beyond what you typically find in this very young but fast growing mobile industry. However, one of the side-effects of highly polished software is that when there are flaws in it, they become quite glaring in contrast to everything else that is done well.

There is one such flaw that I've found. Being able to tell the time on your phone is quite important, which is why the time always takes its place front and center in the middle of the status bar.

When an iPhone is locked, the time and date are displayed in a large font. To avoid needlessly displaying the time twice, the time within the status bar is replaced by a small lock icon.

The problem occurs when you get a call while your device is locked.

At this point, you lose the ability to find out the time until you respond to your phone call.

It's not a big deal, but I've found myself quite jarred by this a few times recently, when I wanted to find out the time before picking up the call.

Update: This has been fixed in iOS 6.0.

0 comments

The effect of motion blur on our perception of frame rate

I got my trusty CRT monitor, capable of high refresh rates, from the basement to try my recent motion blur demo on it. The demo looked great on my 60 Hz LCD, so I had high expectations for 120 Hz.

I was shocked by the outcome. As expected, going from 60 to 120 Hz made a huge, noticeable difference in my demo with motion blur off. But with it turned on, there was no significant difference. For the first time, I couldn't believe my eyes.

As a competitive gamer and someone very passionate about display technologies, I am very familiar with the concept of refresh rate. The more, the better, I always thought. 60 Hz is okay, but 120 is much better! I could always easily tell the difference. Until now.

After thinking about it, it made sense. This is probably the root of all the "but the human eye can only see at most X frames per second" arguments on the internet. It matters what the frames are, or more precisely, how distinct they are. I can easily tell the difference in frame rate when the frames are very distinct, but I couldn't when motion blur made them less so.

You can experience the motion blur demo for yourself if you have a WebGL-capable browser.

The examples are abundant. Take movies. At 24 frames per second, the motion looks acceptably smooth. But take any video game running at 24 frames per second, and it's unplayable. The difference is in motion blur. Each frame in film captures light for roughly 1/24th of a second. In a video game, each frame is typically rendered as if with an instant exposure time. For these types of frames, yes, the higher the frame rate, the smoother the motion appears.

I am planning to confirm my findings on a 120 Hz LCD monitor soon, but I am certain that the results will be similar.

3 comments

How to make your networked game smooth as butter

"By avoiding a future arrow from hitting your present self, you ensure your past self will dodge the present arrow."

Suppose you are designing a networking system for a multiplayer game. One of the biggest factors in the quality of a networking design is how smooth the game feels to its players. People are not happy when they see snapping, jittering, or other unexpected non-smooth behaviours. Such things occur because of latency over the internet, and games have to try to predict things that are unpredictable by nature (e.g., player movement). Each time the prediction is wrong, the user sees an unpleasant correction.

One way to reduce such negative effects is to try to mask the corrections, by using smoothing.

I propose another approach. We can try to rely less on information that is uncertain, and rely on what is known. The latency is still there, but we can work around it.

Imagine an internet game server with multiple players, where everyone has less than 100 ms round-trip latency (and under 50 ms single-way latency). Those are good internet conditions, but not too unrealistic for today.

That means each player can know exactly where all other players were 100 ms ago, without having to use prediction. Let's render them there. The local player still sees himself at present time, but he sees all other players 100 ms in the past, 100% smoothly (ignoring dropped packets).
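Here's a minimal sketch of that idea in Go, assuming the client keeps a short history of timestamped position snapshots received from the server (the types and names here are mine, for illustration):

package main

import (
	"fmt"
	"time"
)

// snapshot is a remote player's position as reported by the server.
type snapshot struct {
	t    time.Time
	x, y float64
}

// delay is how far in the past we render remote players.
const delay = 100 * time.Millisecond

// positionAt returns where to render a remote player. Because we sample
// the past (now minus delay), we can interpolate between two snapshots we
// already have, instead of extrapolating into the unpredictable future.
func positionAt(history []snapshot, now time.Time) (x, y float64) {
	if len(history) == 0 {
		return 0, 0
	}
	target := now.Add(-delay)
	for i := 1; i < len(history); i++ {
		a, b := history[i-1], history[i]
		if !target.Before(a.t) && !target.After(b.t) {
			f := float64(target.Sub(a.t)) / float64(b.t.Sub(a.t))
			return a.x + f*(b.x-a.x), a.y + f*(b.y-a.y)
		}
	}
	// No snapshots bracket the target time (e.g., dropped packets);
	// fall back to the most recent one.
	last := history[len(history)-1]
	return last.x, last.y
}

func main() {
	now := time.Now()
	history := []snapshot{
		{t: now.Add(-150 * time.Millisecond), x: 0, y: 0},
		{t: now.Add(-50 * time.Millisecond), x: 10, y: 0},
	}
	x, y := positionAt(history, now)
	fmt.Println(x, y) // Halfway between the two snapshots: 5 0
}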

We want to let players aim at enemies where they see them, rather than at where they think the enemies actually are given the current latency (à la Quake). So the server performs hit collisions with regard to what the shooting player saw. Bullets travel in present time, but they collide with players' positions 100 ms in the past.

All the above has been done before (see Half-Life 1). But here's the kicker of this post.

What if you try to dodge a bullet by doing something within 100 ms of it hitting you? With the above system, you physically can't. No action, however drastic, within 100 ms of the bullet hitting you can save you, since the bullet will still hit where you were 100 ms ago (before you took said action).

It's not that big a deal for bullets, since they travel quite fast so there's little you can do in 100 ms to change anything. We can accept that as is. But can we do better?

What if, instead of a bullet, we have a slower-moving projectile like an arrow? The player might want to jump out of harm's way just before it hits them, just to spite their opponent. With the above system, they will see themselves clearing the arrow, yet it still hits their alter ego from 100 ms in the past, and the player quits in frustration.

Yes, we can do better. We take advantage of the principle that player movement is not easily predictable, but arrow projectiles are (all you need are the initial conditions). That's why we render other players 100 ms in the past, but we can try to render other players' projectiles 100 ms in the future.
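Continuing the sketch from earlier (same package, reusing its delay constant), extrapolating a projectile is straightforward because its trajectory is fully determined by its initial conditions:

// A projectile's path is deterministic, so unlike players it can safely
// be extrapolated into the future rather than sampled from the past.
type projectile struct {
	launched time.Time
	x0, y0   float64 // launch position
	vx, vy   float64 // velocity, in units per second
}

// renderPosition extrapolates the projectile delay (100 ms) ahead of
// present time, so the local player sees it where it will "really" be.
func (p projectile) renderPosition(now time.Time) (x, y float64) {
	dt := now.Add(delay).Sub(p.launched).Seconds()
	return p.x0 + p.vx*dt, p.y0 + p.vy*dt
}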

Now, when you see an arrow just about to hit your face, and you duck at the last millisecond, you're actually ducking 100 ms before the real arrow will have hit you. By avoiding a future arrow from hitting your present self, you ensure your past self will dodge the present arrow. :D

Additional Notes

7 comments

A Subway Ride

You are riding the subway. It's a regular train car. It feels familiar. There aren't many people. Some are reading newspapers. A woman takes a sip from her coffee cup. An older woman in a wheelchair waits patiently for her stop. The train moves silently, as if through a thick layer of fog.

It makes a stop, and a man gets on.

The train continues its monotone passage through a moonless night. It's smooth sailing under the stars, except they're not visible from within the tunnel.

A man in the seat ahead talks out loud, either to himself or into a headset. More likely the former.

People continue to mind their own business, each one almost completely oblivious to the presence of others.

An empty coffee cup rocks back and forth under the seat in front, caressed gently by the soft ride.

Another stop. The next station is Eglinton. They become more frequent, each one seemingly marking a period at the end of a sentence in the lifetime story of the train.

Three girls chatter happily at the end of the train car. Some laughter is heard from the other end. As I near my destination, my heart rate goes up ever so slightly. The smell of excitement fills the car.

The night is young. Whatever adventures await ahead are still unwritten.

1 comment

Versionless Software

Suppose you have an ideal piece of software at version X with one bug in it. When the developer fixes the bug in version Y, you want to have that bug fixed too.

Without automatic updates, this means you have to manually update to Y. This has some cognitive effort associated with it. When multiplied by hundreds of different pieces of software, it is quite significant.

Wouldn't it be better to have it automatically update without the end user having to worry about or even know anything about versions? Yes and no:

I think the reasons for the 'no' answer can, in theory, be avoided, leaving 'yes', and therefore making it possible to reduce cognitive load. To make our lives simpler by not having to regularly update software or deal with a major update from a very old version.

0 comments

Online non-simulations

I think there is a lot of untapped potential for online games. And by that, I mean something else altogether.

Most online games right now are perfect simulations, or rather, they attempt to be. Most online games imply multiplayer, and each player connected to the same server is a part of the same virtual world. What player A sees going on is very close to what player B sees (albeit from a different perspective).

Let's break that constraint. Suppose there is an online multiplayer game where two players are connected to the same server, yet they live and interact in two completely different realities. Think about the doors this opens. The opportunities are endless. However, not many of them would make fun games. But I believe there may be something worthwhile out there. We just have to explore this direction a bit.

There already are some examples of this situation happening. Take an RTS game that is peer-to-peer. Imagine that for some reason the two players go out of sync. At that point, they may both think they are winning the battle. There will be two winners and no losers, and each may have a lot of fun... until they realize (by talking) that their games are not in sync, at which point that fun will quickly end and frustration will set in.

But that's only an example of a broken simulation. The multiplayer RTS game was designed to be in sync, not out.

Let's look at online games that were designed to be not in sync.

I remember that in the multiplayer FPS America's Army, there were 2 teams of players fighting each other. The game was made so that whichever team you picked would appear to you as the "good guys", with the other team as the "terrorists." This is a very minor visual aspect that has no gameplay value whatsoever. Maybe it wasn't even worth mentioning. ;)

As all good writers do, I'm leaving the best example for last. There is a short, simple indie game called 4 Minutes and 33 Seconds of Uniqueness. Assuming you don't mind the following spoiler, the game's objective is simply to be the only player in the world playing it for 4 minutes and 33 seconds. It's as simple as that. But it is a great example of a game with online capabilities that does not attempt to simulate the same reality for multiple players at the same time.

What else is possible out there? I hope we are yet to find out.

0 comments

The most boring post

There's a conflict of interest. I've just realized this; I think I now see what it is that makes some stories or people interesting to hear, and some not.

It's a conflict of interest. When you're the one telling the story, it's a story you're excited about, and you want to convey it in the most interesting way possible. You don't want to give away any spoilers that will ruin it. So you take your time and explain everything in detail. Because that's how you would've liked to hear it, or so you imagine. The only reason you wanna hear it that way is because you already know everything, and now you're just going over it a second time.

Other people, on the other hand, have absolutely no idea what your story will be about. They want to know right away. You could tell them in one sentence, but you wouldn't feel satisfied with that. You want to paint the story in its full colours and detail. But other people want to know it as soon as possible. They would love to hear all the details if they already knew the big picture.

I guess the problem is that communication using words takes time; it's not instantaneous. If it were, we'd have no problems.

Personally, I don't even like the idea of movie or book titles, because they give away some of the surprise for me. Yet I want to know in advance if the movie or book will be interesting to experience. It's a conflict of interest.

I'm sure I could've worded this exact idea in just a few sentences, but I haven't. Why? I don't know, it's hard, and it's something I'll have to think about more and hopefully work on.

0 comments

In my defence, or why I don't care

This post is way overdue, and it is a perfect example of what I'm about to talk about.

I'm not very good at English writing, or at clearly and concisely expressing my thoughts in words. Yes, here I am talking about my weaknesses, and you're thinking: why should you even continue reading what this guy has to say?

But it's ok, because I don't care. In a good way.

See, I have this theory that allows me to think that it's ok for people to be wrong or bad at something. At least, it shouldn't stop them from working on what they love (as long as it doesn't negatively impact others).

The reason for that is that since there are many people, there's nothing wrong with them fanning out and trying to work in different directions (think of a non-deterministic Turing machine). Some will be right, some wrong. The alternative is catastrophic: groupthink. If everyone were to try only one approach, then everyone is simply screwed if the group's consensus is wrong.

Consider the following example. At some point, people thought the Earth was flat. If not for the person/people who dared to go against the flow, perhaps we would never have discovered otherwise.

Hence, I believe that no matter how wrong or stupid something I'm doing may seem to others, I have perfect grounds to stick with it if I believe it to be the right thing.

Of course, this doesn't mean I will ignore all constructive criticism. On the contrary, I will happily consider all such feedback, and try to adjust if I see the error of my ways.

As far as my posts go, I basically have two choices. Either care a lot about my spelling/grammar/presentation, spend too much time editing my posts until I believe they are ready for mass (lol) viewing, and post rarely. Or, not pay much attention to that stuff, but rather post things as they come to my mind with little editing or rewriting. This will allow me to actually use this blog the way it's meant to be used. I hereby choose the latter.

0 comments

My interests

There are a bunch of topics that are of high interest to me, and I'll be using this blog to talk about some of them.

My main passion and dream job is, no doubt, game development. It will likely be the main subject of this blog, along with other computer-related topics. However, I'm also into things like psychology, philosophy, HCI/user interfaces, physics, racing, driving, snowboarding, longboarding, RC cars, racing simulators, displays, pixels, and portable devices, in arbitrary order.

I like cars in general, and anything that has to do with driving, racing or making them. One of my long term dreams has always been to create a playground realistic driving sim where I'm able to experiment with various car designs. Until then, I have LFS. I also tend to play around with some remote controlled cars that I have.

I also like to delve into contemplations about the meaning of life, and other philosophy/psychology questions. I might post some of my ideas on that here.

Anyway, I doubt I will talk about every single one of my interests here, so perhaps throwing all those tags out wasn't the best idea. Oh well.

0 comments

First post

So I've finally decided to create a blog. I will be posting things related to my various interests here.

I know I'm kinda throwing this out into a void here, but that's ok for now. It might be a good outlet for me to get my ideas out and to think about them as I do. After all, it can't hurt, can it?

0 comments