June 23, 2013

Versioning Dotfiles with Git

Posted in git

It’s become quite common to see ‘dotfiles’ repositories across GitHub, and for good reason. Versioning your home directory is a great way to maintain backups of your dotfiles, share with others, learn from others, and make configuring a new machine a bit easier. Many users, however, don’t actually version their home directory. A common approach is to have a dotfiles repo somewhere (let’s say ~/dotfiles) and get the actual dotfiles into the home directory itself in one of a few ways: copying manually, symlinking manually, or scripting the copy/symlink.

I’m not a fan of the copy approach: any changes made to the actual dotfiles must be manually replicated in the repo. I’m also not a fan of the symlink approach: it solves the problem of modifications but doesn’t account for new files. So why don’t we version the home directory itself?

I’m sure there are some good reasons not to version the home directory, and I’d love to hear them. I can’t think of concrete examples where having a ~/.git directory is a bad idea, but I’m sure there are a few. Regardless, here’s my current setup.

My Setup

My dotfiles repo is cloned under ~/dotfiles. Git has a setting (core.worktree) that configures where the working tree should be checked out. It expects a path that can be either absolute or relative to the ‘.git’ directory. To accomplish this, from my home directory:

$ git clone --no-checkout https://github.com/jasonkarns/dotfiles.git
$ cd dotfiles
$ git config core.worktree "../../"
$ git checkout master

This will do a normal git checkout, but instead of checking out the master branch to ‘~/dotfiles’, it checks out to the home directory itself. The repo itself (‘~/dotfiles’) is completely empty, except for the ‘.git’ directory.
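
Here’s a sketch of that split in action, using a scratch directory as a stand-in for the home directory (all paths and file names below are illustrative):

```shell
# Scratch directory standing in for $HOME:
home=$(mktemp -d)

# Bare-bones stand-in for the dotfiles repo, with the working tree
# pointed two levels up from the .git directory (i.e. at "$home"):
git init -q "$home/dotfiles"
git -C "$home/dotfiles" config core.worktree "../../"

# From the repo directory, git sees files in the home directory...
touch "$home/.bashrc"
git -C "$home/dotfiles" status --short   # .bashrc shows up as untracked

# ...but the home directory itself is not a repo:
git -C "$home" status 2>&1 || true       # fatal: not a git repository
```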

Benefits

  1. Any changes to my dotfiles are known to git
  2. New files are known to git
  3. The home directory itself is not a git repo, yet it’s fully versioned.
    • git commands cannot be run under ~
    • git status is not displayed in my command prompt ($PS1) under ~
  4. To manage the dotfiles repo and run git commands, I must be in ‘~/dotfiles’. (I like the forced context switch.)

This setup has worked well for me. I have the benefits of a versioned home directory, without the annoyance of it being an actual repo (like seeing git status info in my command prompt).

Complications

There is one major complication: submodules. I use Vundle to manage my Vim plugins. Vundle itself is a git submodule under ‘dotfiles/.vim/bundle’. In order to run git submodule update, git requires that I be in the repo (duh) but also that I be in the working tree root. Since my working tree root is not in the repo, I get an error:

$ git submodule update
fatal: Not a git repository (or any of the parent directories): .git

To get around this, git has a feature wherein a plaintext file named ‘.git’ is placed in the root of the working tree. It contains just a single line: gitdir: /path/to/actual/repo/.git. While this file exists, the home directory itself becomes, for all intents and purposes, a proper git repo. I can run git commands directly from the home directory and even my command prompt picks up the git status info.

So, with ‘~/.git’ containing gitdir: /Users/<my_user>/dotfiles/.git, I am able to properly run git submodule update and everything works! Of course, as long as this ‘.git’ file exists, my home directory is essentially a proper git repo, so once I’ve run any necessary commands, I simply delete the ‘.git’ file, and I’m back to a plain, non-repo home directory!
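
The toggle itself is just creating and deleting that one-line file. A minimal sketch in a scratch directory (paths are examples; substitute your real home directory and repo):

```shell
home=$(mktemp -d)       # stand-in for your real home directory
repo="$home/dotfiles"   # stand-in for ~/dotfiles
git init -q "$repo"

# While this one-line '.git' file exists, "$home" acts as the root
# of the repo's working tree:
echo "gitdir: $repo/.git" > "$home/.git"
git -C "$home" rev-parse --git-dir   # succeeds; run `git submodule update` etc. here

# Delete the file and "$home" is a plain, non-repo directory again:
rm "$home/.git"
```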


June 9, 2013

Even More Better-er In-Browser Mockups with Lineman.js

Posted in frontend

Garann Means wrote a great two-part piece in Issue #374 of A List Apart on using Node.js for in-browser mockups and prototypes. In “Even Better In-Browser Mockups with Node.js” she gives a great overview of why using Node for mockups/prototypes is such a great idea. In part two, “Node At Work: A Walkthrough”, she dispenses with the theory and shows, hands-on, how to build a mockup you can actually use. I strongly recommend you read both articles.

Using Node.js for mockups provides quite a few key benefits, not the least of which being:

  1. Mockup built using web technologies, not Photoshop comps
  2. It works! (users/stakeholders can get their hands on it)
  3. Rapid development

But there are a few drawbacks to vanilla Node/express.js:

  1. No CSS pre-processors: SASS, LESS or Stylus
  2. No JavaScript pre-compilation languages: CoffeeScript, TypeScript
  3. Hand-wiring templating engines
  4. What about client-side frameworks: Backbone, Angular, or Ember?
  5. Node’s HTTP server (express) can be a pain
  6. Restarting the server on every file change? Blech!

Lineman.js can help!

Lineman.js

Lineman is a tool to help you build fat-client webapp projects, and mockups and prototypes certainly fit that bill! In fact, when working on mockups and prototypes, speed of deployment and rapid development are generally the key factors. Lineman can help you scaffold out a mockup and get you up and running fast!

In addition to all the benefits that Garann discussed for why using Node is a good idea for mockups, Lineman also provides the ability to use CSS preprocessors, compile-to-JavaScript languages, and choose your own templating tool. (Lineman ships with support for LESS and SASS CSS pre-processors, Underscore and Handlebars template engines, and CoffeeScript compilation. And it’s easy to add support for your tooling of choice e.g. Stylus, TypeScript, Mustache)

Lineman eases the pain of integrating with a server-side API. You can simply stub out the API endpoints in Lineman’s config/server.js configuration file (which makes a great deliverable to the server-side team when it comes time to build out the real server endpoint). Or, if the server-side API already exists, you can tell Lineman to just proxy requests to the real API.

Further, to increase the pace of development and feedback loop, let Lineman handle recompiling your assets and restarting the server for you! And lastly, if you plan on using one of the many great client-side frameworks out there (like Backbone, Angular or Ember), Lineman has project templates ready to get you going.

“Cool Art Store” Demo, ported to Lineman

I created a fork of Garann’s original “Cool Art Store” demo project to show how easy it is to use Lineman for simple mockups. I isolated the Lineman port to a branch named ‘lineman’.

What was necessary to get Garann’s Cool Art Store demo running on Lineman?

  1. Ran $ lineman new coolartstore to get the base Lineman archetype project. (much easier than starting with a blank canvas)
  2. Moved the css, js, image, and template assets from public/ to app/. (Not strictly necessary – Lineman can be configured to look for assets anywhere you wish by using config/files.js or config/files.coffee)
  3. Converted the templates from doT.js to Handlebars. (Also not technically necessary – there are a number of doT-precompile grunt tasks, so configuring Lineman to use doT is also an option; I just happen to prefer Handlebars.)
  4. Stripped out the XHR requests for the doT templates; Lineman pre-compiles your templates into a single JS object so you need only execute the template at runtime.
  5. Ported server.js to use Lineman’s config/server.js for defining the API endpoint stubs. This mostly boiled down to removing all the express.js setup code. (Lineman makes it easy to just define the endpoints and not deal with boilerplate.)
  6. Bonus: since Lineman runs the JavaScript through JSHint, it caught a couple of lint-warnings that were promptly fixed.
  7. Bonus: Lineman concatenates and minifies the final CSS/JS assets. If this weren’t just a mockup, we’d be one step closer to shipping!
  8. Bonus: using lineman run meant that I didn’t have to keep restarting the node server after each file modification!

Not just for Mockups

Of course, Lineman is really tailored for developing fat-client webapps for production, not just mockups. So TDD your client-side application using your JavaScript test framework of choice and let Lineman run the specs/tests for you! Or integrate your test suite into your Continuous Integration build. And let Lineman help with deployment to Heroku as well!


June 5, 2013

Supporting TypeScript in Lineman.js

Posted in javascript

Lineman is a great tool for building client-side applications. Out of the box, it supports CoffeeScript, LESS, SASS, and (of course) JavaScript and CSS. But because Lineman is built on Grunt, it is easy to add any grunt task to the Lineman toolchain. Today, let’s see how we would add TypeScript support to a Lineman project.

Here’s a rundown of the necessary steps:

  1. Add the typescript grunt task to your project
  2. Configure Lineman to load the grunt-typescript task and provide configuration for the task itself
  3. Add a TypeScript file to be compiled
  4. Add the TypeScript task to Lineman’s build/run processes
  5. Add typescript-generated javascript as a source for the concat task

Let’s start with a fresh Lineman project.

$ lineman new ts-test
$ cd ts-test

Add Dependency

Add the grunt-typescript NPM module as a dependency in your project by adding it to package.json.

"dependencies": {
  "lineman": "~0.7.1",
  "grunt-typescript": "0.1.6"
},

Running npm install should leave you with grunt-typescript and lineman under node_modules. At this point, the grunt-typescript task is available to Lineman, but isn’t being loaded, so Grunt isn’t aware of it. Verify this by running lineman grunt typescript. You should get:

Warning: Task "typescript" not found. Use --force to continue.
Aborted due to warnings.

Load the Task

Configure Lineman to load the grunt-typescript NPM module by adding it to the loadNpmTasks array in config/application.js. Lineman will invoke Grunt’s loadNpmTasks method on each task in this array. We must also provide the task with its associated config block. This should be the same configuration block that would normally be added to Gruntfile.js.

loadNpmTasks: ['grunt-typescript'],

typescript: {
  compile: {
    src: 'app/js/**/*.ts',
    dest: 'generated/js/app.ts.js'
  }
}

Now running lineman grunt typescript should give you:

Running "configure" task

Running "typescript:compile" (typescript) task
js: 0 files, map: 0 files, declaration: 0 files

Done, without errors.

Add some TypeScript

Create a new file app/js/greeter.ts. This will be the TypeScript source that is compiled into JavaScript. If you remember, our typescript config block specifies the source as ‘app/js/**/*.ts’, so any filename with a ‘.ts’ extension (within any subdirectory) will be compiled into the same JavaScript file. (For more info, refer to grunt-typescript’s configuration.)

function Greeter(greeting: string) {
  this.greeting = greeting;
}

Now running lineman grunt typescript should give you:

Running "configure" task

Running "typescript:compile" (typescript) task
File ts/generated/js/app.ts.js created.
js: 1 file, map: 0 files, declaration: 0 files

Done, without errors.

Now we’ve got the task working independently from the Lineman build process. For any arbitrary grunt task, this would be sufficient; simply run lineman grunt <task>. However, if we run lineman build, the typescript task will not be run. Let’s add it to the Lineman build process.

Compile during Lineman build

You can add grunt tasks to either the beginning or the end of the Lineman build process using prependTasks or appendTasks, respectively. Since we want the generated JavaScript that the TypeScript compiler emits to also be concatenated and minified with the rest of the app, we need the typescript task to run before the other Lineman tasks. We also want this task to run in both dev and dist phases, so we’ll add the typescript task to prependTasks:common array in config/application.js.

prependTasks: {
  common: ['typescript']
},

Now we’ve got the task running as part of the Lineman build process. However, the typescript-generated javascript is not being concatenated/minified with the rest of our application. We need to add the typescript-generated javascript to the concat task’s source list.

Concatenate/Minify

There are a number of different ways to do this. One of the simplest is to modify the concat-task configuration block in config/application.js.

// save off the current config object instead of directly exposing it via module.exports
var config = require(process.env['LINEMAN_MAIN']).config.extend('application', {
... // this remains untouched
});

// add the typescript-generated js as a concat source
config.concat.js.src.push('generated/js/app.ts.js');

// export the modified config
module.exports = config;

Now when we run lineman build our Greeter function is compiled from TypeScript into JavaScript, concatenated along with the rest of the application JavaScript (‘generated/js/app.js’) and minified (‘dist/js/app.min.js’)!


June 3, 2013

Ender’s Game

Posted in Uncategorized

This is clearly not programming related, but with the Ender’s Game movie opening in November, I know quite a few people who are digging into Ender’s universe for the first time. I’ve been asked (more than once!) to sort out the maze of books and short stories into some kind of order. I’m no expert and I won’t go into the same level of detail as The Machete Order did for Star Wars, but here’s my recommended reading order for the Ender’s Game series, including short stories:

  1. Ender’s Game
  2. Ender’s Homecoming
  3. Young Man with Prospects
  4. Ender in Flight
  5. Gold Bug
  6. The Polish Boy
  7. Teacher’s Pest
  8. Mazer in Prison
  9. Pretty Boy
  10. Cheater
  11. Ender’s Stocking
  12. A War of Gifts
  13. Investment Counselor
  14. Speaker for the Dead
  15. Xenocide
  16. Children of the Mind
  17. Ender’s Shadow
  18. Shadow of the Hegemon
  19. Shadow Puppets
  20. Shadow of the Giant
  21. Shadows in Flight
  22. Ender in Exile
  23. Earth Unaware
  24. Earth Afire

Rationale

(mostly spoiler-free)

All things being equal, a publication-order reading of the Ender series is recommended. I find this order better because the many connections and references between various short stories and books are too subtle to be appreciated unless the fuller story has already been told. However, there are a few changes to this order that I recommend.

The chronological (and topical) gap between Ender’s Game and Speaker for the Dead is quite significant. It is the perfect chance to remain in the era of the Battle School before continuing on with Ender’s storyline. Further, the Shadow series is too tightly woven to be broken apart, even though some of the short stories might fit a bit better chronologically. With this decision made, we are now forced to choose between the book Ender in Exile and its four constituent stories “Ender’s Homecoming”, “Young Man with Prospects”, “Ender in Flight”, and “Gold Bug”. Since the book makes a few additional connections with the Shadow series, I recommend postponing Ender in Exile until after the complete Shadow series. This leaves us with the four short stories immediately following Ender’s Game. These stories provide just enough time for the Ender story to settle a bit before circling back on life before the Battle School with the short stories “The Polish Boy” and “Teacher’s Pest”. Then we can segue back to the Battle School with “Mazer in Prison”, “Pretty Boy”, and “Cheater”.

At this point, I recommend A War of Gifts which occurs during Ender’s time at Battle School. “Ender’s Stocking” can be skipped as it is essentially Chapter 2 of A War of Gifts.

Immediately before you slide back into the Ender series with Speaker for the Dead, I strongly recommend “Investment Counselor” (not just before; immediately before). Then, after Speaker for the Dead, continue on with Xenocide and Children of the Mind.

Now the Ender saga is complete (with the exception of Ender in Exile). Take a few weeks off and then dive into the Shadow series: Ender’s Shadow, Shadow of the Hegemon, Shadow Puppets, Shadow of the Giant and Shadows in Flight.

Cap the whole thing off with Ender in Exile. Chronologically, it dives right back into the middle of everything, but I feel it’s much better to postpone this book until all of the character references are clear.

The final trilogy is still being written, but Earth Unaware is already available and Earth Afire should be available by the time you read this.

June 14, 2012

Shell Apps and Silver Bullets – A Rebuttal

Posted in mobile

I don’t want to get into the entire Web vs Native debate. However, a post by @sandofsky against shell apps (like PhoneGap) misses the mark on many of its arguments. I suggest you read the original post.

First I would like to say: in my opinion, the best use cases for shell apps are apps that are primarily web apps or began life as pure web apps. I tend to lump shell apps and web apps together because I think a shell app ought to be a web app that’s simply enhanced with some additional device APIs. Any serious amount of device API usage and you’re probably better off going native. (Which probably doesn’t put me too far off from the author.) I don’t believe anyone is actually selling shell apps as a complete replacement for native apps. Or at least I’m not. I tend to believe native apps will eventually just die out on their own…

But on to the article…

The Framework Tax

If the native platform introduces a new API, or you run into an edge case that requires extending the shell framework, it could be months before you can implement your own app’s functionality.

The underlying pain point of the framework tax is if/when new APIs become available on a device, you’re stuck waiting for the shell framework to catch up. This is a legitimate concern but is being used as a straw man argument. Shell apps are aimed at people trying to create apps across multiple different devices. In which case, a single platform’s latest API is not something you can develop for anyway. If you’re creating a consistent experience across platforms, you have to wait for iOS to catch up to Android’s latest features and vice versa.

Browser Fragmentation

When developing a native iPhone app, the development cycle is: write a little code, run it in the simulator, and repeat.

The article only mentions the iPhone. If you’re only targeting iOS, then fragmentation is not a problem (market-share is). But as I’ve already stated (and will continue to point out), PhoneGap is for those who want cross-platform. If you’re targeting a broad market, then fragmentation is a given. In fact, targeting the browser actually reduces fragmentation because it has a more consistent API across devices than the native platforms do.

Shell apps require you write a little code, run it on an iPhone simulator, an Android simulator, a Windows Phone 7 simulator, et. al.

Once again, if you’re creating an app for each device, you’ll need as many simulators as platforms. This is pretty much a wash.

Versioning

Shell apps let you update content without requiring a full app release by serving your pages off a server. In the process, you lose release atomicity, the assurance that whatever you ship to clients comes in one complete, unchanging bundle.

So the argument against versioning is: Can’t change the web portion of the app separately from the shell portion of the app?

Solution: Don’t have a ‘shell’ portion. If you’re using shell apps, the app should be just the web portion. If you need that much shell stuff that you actually have shell ‘chrome’, then stick with pure native. At that point, you’re no longer a web app.

Honestly, I’m surprised this is even listed as a con for shell apps. Versioning is one of the biggest wins for shell apps. It’s one of the biggest reasons that the web as a platform is so awesome. Immediate client updates. Period. No software delivery. No need to support old clients or deal with versioning of services because someone is using a 4 year old client and still hitting your server. If your app is a web app, then there is no such thing as an out-of-date client.

And speaking of versioning services… If your native app is still hitting your own managed services (and at this point, how many interesting native apps don’t hit the cloud in some way?), then you’ve just forced yourself to maintain versioned services.

The Uncanny Valley

Evaluated against native apps, shell apps have inconsistencies that make them feel wrong, even if the user can’t articulate the problem.

No argument here; that’s spot on. This is where the web platform needs a lot of work. I will, however, note that I don’t think web apps should try to look and feel native. Again, one of their biggest strengths is being cross-platform. Having a consistent look and feel whether you’re on an Android phone, iPad or Win8 tablet is something that shouldn’t be overlooked.

Performance

Maybe web technology will one day be as fast as native code.

This is the same argument that says Java is always slower than C and JavaScript will never be faster than native code. While performance is a huge problem for web-based apps, you can’t dismiss the web out of hand. Many times you actually get performance improvements by running on a VM (HotSpot optimization, JIT).

Of course, this partially goes hand in hand with using UI libraries in order to look native. If you drop the hacks that try to emulate native feel, most of your performance problems disappear.

Additionally, native apps have the same network bottleneck as web apps when they need to hit services, so that’s a wash. And web apps have numerous faculties for caching; they just require some experience to get right.

Fast Release Cycle

The time spent designing, developing, and verifying functionality dwarfs the time spent in an app store review process.

No, the app store review process doesn’t impact your development cycle if you only release once a month. But if you’re running continuous deployment (a desirable if lofty goal), then pushing an app-store release multiple times a day is ridiculous, while releasing a web update hourly is perfectly fine. Bug fixes? Hours, not weeks. Want to try A/B testing? Get instant feedback. And don’t overlook the annoyance factor your users encounter when they get an app update every week.

Write Once, Run Anywhere

Fifteen years ago Java promised cross platform client development. […] Count the number of apps in your dock running Java.

So the web platform must be a failure because Java failed? Sorry, that doesn’t cut it. The web platform is the closest thing to single deployment that’s ever existed. This is primarily due to its ability to tolerate broken-ness and degrade gracefully. Yet how many versions of native apps would you have to develop to support the various API/device feature flavors of Android? And that’s just one platform!

HTML5 Is Easier

Every language has warts. […] Today, for high level work, I am as productive using Objective-C as JavaScript.

Yes, every language has warts. Stick to your strength. BUT: how many developers know Java and Objective-C and .NET? How many can develop across all these platforms simultaneously? Once again, if you only care about one platform, then go native. If you want reach, one single platform is definitely easier.

It’s harder to build an app with web technology. The web began as a document format, with an interactive layer added later.

I think the “web as a document platform” is old-hat at this point. JavaScript is a first class language. The new and coming HTML5 APIs offer a compelling feature list in a very malleable and powerful language.

And again, the layer-ability of web technologies is an incredible asset. How many mash-ups exist between Java applets, Flash sites, or Windows desktop apps? Going with the web platform opens up so many possibilities. Take advantage of the ability of your app to spawn other apps, which may even improve your own!

Silver Bullets

No, there are no silver bullets. And I agree that there are many, many strong use cases for native apps. But the problem with this article is that so few of the arguments put forth are actually relevant. You shouldn’t be deciding to go native or go web based on the Framework Tax, Performance, Versioning, Fragmentation, or anything else. Start with what you want the app to do. Go native, if you must. But think long and hard about the benefits of using the universal platform. Think about the device upgrades in the next 6 months. Will your native app work out of the box on the latest OS? What about the next wave of devices the following year (refrigerators, car systems, washing machines, kiosks). What device APIs will they have? What features will they have? No one knows. But one thing you can bet on: they will have web browsers.


April 19, 2012

LOAD ALL THE RUBIES!!!

Posted in ruby

Dangerous Cucumber Loading Issue

I recently discovered a potentially dangerous issue with how cucumber loads ruby files. The standard cucumber project expects a features directory in which to place your .feature files. Standard practice is to place supporting ruby files in features/support and to place step definitions in features/step_definitions, but that’s just convention. Cucumber recursively loads all .rb files under features/ (ensuring only that features/support/env.rb is loaded first). This is all well and good, so long as you actually have a features directory (and it’s in the root of your project). So what happens if you name your features directory something else or your features directory is in a sub-directory? If you remember to tell cucumber which directory to load from, everything is fine (`cucumber alternative_folder/`). If you forget, however, cucumber will dutifully load (recursively) every ruby file it can find!

format_hard_drive.rb

This is still not much of a problem, assuming all your ruby files are just plain old ruby classes or benign ruby scripts. The problem really rears its head if, say, you have a utility_scripts directory full of potentially dangerous scripts for setting up project directories, manipulating test databases, or pretty much anything. (Who else has an `rm -rf /` ruby script?)

Cucumber - Load All The Things!!!

Lesson

Name your features directory features.


April 17, 2012

Fetch and Iterate

Posted in javascript

Tonight I stumbled across a blog post from the future! Or something like that. On 4/17 at 11:30PM I read a blog post that was published on 4/18 at 1:54. Time zones = time travel. Anyway…

It was about finding a cleaner, more succinct, syntax for a standard pattern:
fetch_data, process_data, rinse and repeat until no more data!

Quick, go read the original post. Don’t worry, it’s short.

{elevator music}

Okay, so how do we improve this? Well, in some languages (like JavaScript, Ruby, and PHP), an assignment expression actually returns a value. And since the expression returns a value, the expression itself can be used as the looping condition, like so:

while (data = fetch()) {
  process(data);
}

Notice the single equals sign? We are doing assignment here, not an equality check. We invoke fetch() and the result is assigned to data. The value of data itself is then used as the while condition, so data is continually fetched and processed until fetch() returns a falsy value. (In JavaScript, that would be one of false, undefined, null, 0, "", or NaN.)

This is a very common pattern in PHP, especially when fetching/querying data from a database. It’s a much less common pattern in JavaScript, but still quite valid. As is with most useful things, this approach can be misused. It can easily be confused for a typo (missing an equals sign?) so it should be used with care. For instance, ensuring that your fetch() function and data variable are named clearly should make it less likely to be misunderstood. When used appropriately, it can certainly increase the clarity of your code.
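
Incidentally, shell scripting bakes the same idiom into `read`: it both assigns and reports success, so the fetch doubles as the loop condition. A tiny illustrative sketch:

```shell
# `read` assigns the next line to `data` and returns non-zero at
# end of input, ending the loop -- fetch and test in one step:
printf 'a\nb\nc\n' | while read -r data; do
  echo "processing $data"
done
```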


April 3, 2012

ANSI color in Windows shells

Posted in shell

Having used git on windows for over three years now, I’ve fallen back in love with the command line. Bash, of course, not the windows command prompt. Beautiful, ubiquitous, warty bash. Git depends heavily on GNU utilities so on Windows it requires either cygwin or msys. Having been burned by cygwin in the past, I prefer the minimalism and simplicity of msys + mingw. Along with git, the entire ruby ecosystem lives in the shell. However, the numerous tools, gems and utilities that assume standard ANSI color support in the shell began to wear on me. Lo and behold, there is a lovely solution to provide ansi color support for bash (and cmd) on Windows: ansicon.

Download the zip and extract. There are a few ways to install:

  1. Extract to a permanent location (I use C:/bin/ansicon). Execute ansicon.exe -i from within the appropriate directory for your system (x86/x64), and you’re all set. Any new shells (bash and windows cmd included) will autorun the ansicon utility for displaying color output. If you ever need to move the executable, be sure to run ansicon.exe -u first; this removes the registry entry and prevents an ugly error message for every command shell.
  2. Alternatively, place the ansicon executable somewhere in your PATH (or add its location to your PATH). Then you can launch ansicon for a single session with ansicon.exe -p.

This utility has been working great for me on Windows XP. I’ve been having trouble getting it to work on Windows 7, but I hear it should be supported. I’ll post an update when the Windows 7 issue is resolved.

Update

Root cause, uncovered! If you use JRuby with a 64-bit JVM on Windows x64, ansicon won’t work. The issue is that 64-bit ansicon can inject into 32-bit processes, but not vice versa. Currently, the JRuby launcher is a 32-bit executable. So even if you’re running a 64-bit shell (cmd, bash, or otherwise) and ansicon injects correctly into that process, it then injects into the 32-bit JRuby launcher, and at that point you’re effectively running the 32-bit version of ansicon, which cannot inject into a 64-bit JVM. There is an open feature request for JRuby to ship its 64-bit version with a 64-bit launcher. You should vote for this feature. I also hear that adoxa (Jason Hood) has a potential fix for this issue in the works. Stay posted.

Of course, the easiest solution at the moment is to ensure that JRuby uses a 32-bit JVM. Just change (or set) your JAVA_HOME environment variable to point to a 32-bit JVM and you’re golden.

Update 2: Issue Resolved

The latest 64-bit binaries (ansi6432.zip) have fixed the issue. Just download and extract them over top of the 1.51 version.


November 15, 2011

Subdirectory Checkouts with git sparse-checkout

Posted in git

If there is one thing I miss about SVN after switching to git (and trust me, it’s the only thing), it is the ability to check out only a sub-tree of a repository. As of version 1.7, you can check out just a sub-tree in git as well! Now not only does git support checking out sub-directories, it does it better than subversion!

New Repository

There is a bit of a catch-22 when doing a sub-tree checkout for a new repository. In order to only checkout a sub-tree, you’ll need to have the core.sparsecheckout option set to true. Of course, you need to have a git repository before you can enable sparse-checkout. So, rather than doing a git clone, you’ll need to start with git init.

  1. Create and initialize your new repository:

    mkdir <repo> && cd <repo>
    git init
    git remote add -f <name> <url>
  2. Enable sparse-checkout:

    git config core.sparsecheckout true
  3. Configure sparse-checkout by listing your desired sub-trees in .git/info/sparse-checkout:

    echo some/dir/ >> .git/info/sparse-checkout
    echo another/sub/tree >> .git/info/sparse-checkout
  4. Checkout from the remote:

    git pull <remote> <branch>
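The steps above can be run end to end as one script. Here is a self-contained sketch: it fabricates a throwaway local “remote” repository with two sub-trees (the names remote-repo, sparse, docs/, and src/ are all invented for illustration) and then sparse-checks-out only docs/:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# Fabricate a throwaway "remote" containing two sub-trees.
git -c init.defaultBranch=master init -q remote-repo
(
  cd remote-repo
  mkdir -p docs src
  echo guide > docs/guide.txt
  echo main  > src/main.c
  git add .
  git -c user.name=demo -c user.email=demo@example.com commit -qm init
)

# 1. Create and initialize the new repository, adding the remote.
mkdir sparse && cd sparse
git init -q
git remote add -f origin "$tmp/remote-repo"

# 2. Enable sparse-checkout.
git config core.sparsecheckout true

# 3. List the desired sub-trees in .git/info/sparse-checkout.
echo docs/ >> .git/info/sparse-checkout

# 4. Checkout from the remote.
git pull -q origin master

ls   # only docs/ is materialized; src/ stays out of the working tree
```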

Existing Repository

If you already have a repository, simply enable and configure sparse-checkout as above, then update the working tree with git read-tree.

  1. Enable sparse-checkout:

    git config core.sparsecheckout true
  2. Configure sparse-checkout by listing your desired sub-trees in .git/info/sparse-checkout:

    echo some/dir/ >> .git/info/sparse-checkout
    echo another/sub/tree >> .git/info/sparse-checkout
  3. Update your working tree:

    git read-tree -mu HEAD

Modifying sparse-checkout sub-trees

If you later decide to change which directories you would like checked out, simply edit the sparse-checkout file and run git read-tree again as above.

Be sure to read the documentation on read-tree/sparse-checkout. The sparse-checkout file accepts file patterns similar to .gitignore. It also accepts negations, enabling you to specify certain directories or files not to check out.
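For instance, a hypothetical .git/info/sparse-checkout that materializes everything except one directory might look like this (the directory name is invented for illustration):

```
# include everything at the top level...
/*
# ...but skip this one sub-tree
!logs/
```

After editing the file, run git read-tree -mu HEAD again to apply the change to your working tree.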

Now there isn’t anything that svn does better than git!

 
 


July 5, 2011 1

JRuby on MSYS | MinGW

By in jruby, ruby

For many Windows users, like myself, the easiest way to get up and running with Ruby is to install JRuby. If you’re like me, then you may also be a Git user. Now this is just a hunch, but I would wager that if you’re a git user and interested in ruby, then there is a high probability that you’re also a fan of proper *nix shells. If all of the above hold true, keep reading.

As a Windows user without access to a proper command line shell (and too tired of fighting with cygwin), I was delighted to have a bash shell at my command after installing msysgit. The subsystem beneath msysgit is MSYS | MinGW. According to their site, MinGW (“Minimalistic GNU for Windows”) is a collection of freely available and freely distributable Windows specific header files and import libraries combined with GNU toolsets that allow one to produce native Windows programs that do not rely on any 3rd-party C runtime DLLs.[1] In addition to MinGW, MSYS is a collection of GNU utilities such as bash, make, gawk and grep to allow building of applications and programs which depend on traditionally UNIX tools to be present. It is intended to supplement MinGW and the deficiencies of the cmd shell. [2] In simplest terms, MSYS | MinGW is a lightweight Cygwin. It is ‘lightweight’ because it doesn’t provide *nix system calls or a POSIX emulation layer. However, if you’re looking for the standard *nix toolsets on Windows, MSYS | MinGW is a great utility.

NoClassDefFoundError: org/jruby/Main

With MinGW installed along with msysgit, I’ve returned to the bash shell as my primary shell on Windows. However, because MinGW is not quite *nix, nor is it really Windows, the standard JRuby installation doesn’t work out of the box. After running the JRuby installer, you pop open a bash shell and run jruby -v to verify the jruby/ruby version. Or you try to run irb or jirb to get a ruby console. Or you try to install a gem via gem install. Up pops a giant error:

Exception in thread "main" java.lang.NoClassDefFoundError: org/jruby/Main
Caused by: java.lang.ClassNotFoundException: org.jruby.Main
        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
Could not find the main class: org.jruby.Main.  Program will exit.

Of Shells and Executables

So what’s the problem? The JRuby installer for Windows includes quite a bit of stuff in the bin directory. First and foremost is jruby.exe. This is the real executable for Windows. You’ll also see jruby.bat. This is just a wrapper which calls jruby.exe. (I assume this is a backwards-compatibility artifact from before the jruby.exe launcher existed.) You’ll also notice an extension-less jruby shell script. When you execute any of the jruby commands (jruby, irb, jirb, gem, etc.) from a Windows command prompt, it fires off jruby.exe or jruby.bat because those are the file extensions it is configured to look for. However, the MinGW bash shell prefers the jruby shell script and executes that first. Near the top of this shell script, you’ll find a block of code that determines the OS the shell is running under.

# ----- Identify OS we are running under ---------------
case "`uname`" in
  CYGWIN*) cygwin=true;;
  Darwin) darwin=true;;
esac

The shell script assumes we’re either *nix, CYGWIN or Darwin (Mac). This is understandable as the Windows Command Prompt will not attempt to execute this script. However, now that we’re using a proper bash shell with MinGW, we need to tell JRuby to expect MinGW.

The Fix(es)

The simplest fix is to delete the shell script. This way, when MinGW searches for an executable, the first it finds is jruby.exe. Alternatively, you can add the following line to the case statement in the jruby script:

MINGW*) jruby.exe "$@"; exit $?;;

This line checks whether we’re running under MinGW and, if so, executes jruby.exe, passing along any parameters. The shell script then exits with the same exit code as jruby.exe. The case statement should now look like:

# ----- Identify OS we are running under ---------------
case "`uname`" in
  CYGWIN*) cygwin=true;;
  Darwin) darwin=true;;
  MINGW*) jruby.exe "$@"; exit $?;;
esac

OSS and GitHub to the Rescue

Thanks to JRuby being an Open Source project, and GitHub for having awesome collaboration tools, this patch was submitted and accepted to the JRuby project on GitHub. Future installations of JRuby should work on MinGW out of the box.

 
 
