Ceasar's Mind

Follow me: @Ceasar_Bautista

College Student’s Guide to Brewing Coffee

leave a comment »

Coffee seems to be ubiquitous. Walking through campus, almost everyone has a cup, seemingly at all times of day. It makes sense: good coffee tastes good, and the boost to energy can be a life-saver when you’ve just woken up or need to stay up late. But if you’re on a budget and you truly like the taste of coffee, it can be difficult to find good coffee that’s also cheap.

The solution of course is to brew your own.

Why Brew?

The reasons for brewing to me were obvious:

  • I hated paying so much. $1.75 for a cup of plain coffee is way more than I would like to pay, and it only gets worse with more exquisite drinks (which are not even that much more expensive to make).
  • I hated wasting time. Spending at least ten minutes walking to and from a good cafe was quickly adding up.
  • It’s something that genuinely interests me. I want to know how to tell a good coffee from a bad one, and to be able to appreciate more exquisite drinks from different areas of the world. As far as I can tell, the best way to learn about something is to take a hand in doing it (and experiment).

Finally, let’s dispel reasons why you might not want to:

  • It’s expensive to get started. This is simply untrue. For my modest setup, I spent $70. And considering how cheap it is to brew, I expect it to pay for itself quickly: at about $0.16 per cup, I’m saving roughly $1.50 each time, so the setup pays for itself after about 47 cups, or a month and a half.
  • I don’t have time. This is something I’ve thought about a lot; I’m very much into automation. However, I’ve found that brewing takes very little time (maybe five minutes, and most of it is downtime), and in any case it’s faster than walking to the nearest cafe.

Getting Started

So how to get started? I’m going to keep this short, mostly because there are already many great resources on r/coffee and coffeegeek.com, and I mostly want to save everyone the time of painstakingly crawling Amazon for the best equipment.

Effectively, you will need three things to get started:

  1. Something that can boil water.
  2. Something to brew coffee in.
  3. The coffee itself.

Buying a Kettle


Boiling water was a problem for me because I have no stove. Fortunately, there are electric kettles, and better yet, they are very fast at bringing water to a boil (think three minutes, tops). I highly recommend the utilTea electric kettle ($45). What’s great about it is that it boils water fast and has a variable temperature setting, so you can also use it for making tea. On top of that, if you look at other kettles you will notice that many of them have defects (durability concerns, failing automatic shutoffs, or chemically tainted water). This kettle, as far as I could tell, suffers none of those flaws, and with nearly five stars across 141 reviews and only five one-star ratings among them, I was fairly confident the kettle would be, at the very least, not bad.

Buying a Brewer


The second thing you will need is a brewer. r/coffee recommends a French press to get started, but I say skip that and just get the Aeropress. The Aeropress is highly popular on r/coffee, and more importantly, it’s significantly cheaper than a good French press ($25 vs $80). It’s a bit unconventional, but it makes good coffee and is very easy to clean. There is also apparently a lot of experimentation you can do with it to improve your brew if you get into it, but I’ve been content so far with following the directions from the box. The only thing I dislike about it is that it is aesthetically a bit ugly, but I can take it considering the price.

Buying Coffee

The last item can obviously be obtained from any grocery store. A 14oz bag of ground coffee costs me around $5.50 from Trader Joe’s, though the prices go up if you want more exotic options. This is a great deal: if I’ve done my math right, with two tablespoons of coffee per cup and five tablespoons per ounce, I should get 35 cups from that bag, or about $0.16 per cup. The only downside is that ground coffee loses its freshness very quickly once the bag is opened, so if you’re in for the long haul, you’ll eventually want to grind your own beans. Given that, I wouldn’t worry very much about exactly which coffee you get; just buy something cheap, and add “get a grinder” to your to-do list.
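
For the skeptical, the arithmetic can be sanity-checked in a few lines (the prices and tablespoon figures are just the ballpark numbers from above):

```python
# Ballpark cost-per-cup check for a 14 oz bag of pre-ground coffee
bag_price = 5.50        # dollars, Trader Joe's
bag_oz = 14
tbsp_per_oz = 5         # rough density for ground coffee
tbsp_per_cup = 2

cups = bag_oz * tbsp_per_oz / tbsp_per_cup   # 35 cups per bag
cost_per_cup = bag_price / cups              # roughly $0.16
print(cups, round(cost_per_cup, 2))
```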

I also want to recommend grabbing some decaf while you’re at it. If you’re like me, you genuinely enjoy the taste and sometimes want a drink at night but you don’t want the caffeine. Decaf tends to be a little more expensive, but it’s well worth it.

Wrapping Up

That should get you started and hopefully save you a lot of time researching which things to get. For about $70 you can start brewing good cups for a fraction of the cost of any cafe.

I’ve had my setup for about a week now, and it’s been serving me well. I’m looking to get my own grinder soon to ensure my coffee is always fresh (not to mention, this decreases cost yet again), but I’m very content with what I have just now. As I’ve learned, coffee is actually a very deep rabbit hole, and I look forward to learning the ins-and-outs of making and appreciating a good drink for years to come.

Written by Ceasar Bautista

2012/09/12 at 09:24

Posted in Uncategorized

Manage your Terminal windows with your mouse

with 3 comments

I’m a Vim user, and naturally, I like to keep a lot of files open when I’m working. There are typically lots of related modules that need to be viewable at once, and the result is I need a bunch of windows open.

Such a workflow demands opening and positioning lots of terminal windows. The problem is, Cmd+N doesn’t quite cut it since it positions new terminals in an arbitrary manner.

One solution is to download something like Divvy, which creates a 6×6 grid that you assign windows to (and hotkey the assignments for speed). The problem with this is that for a big monitor, 6×6 is not granular enough and on top of that Divvy’s price is kind of hard to justify.

Fortunately, I’ve come up with a solution that, while I think there is some room for improvement, has so far turned out to be pretty good: Make the terminal window center itself on the mouse after spawning.

To do so is pretty simple if you’re willing to dig into Apple’s osascript. I can’t tell you much about it as I only read enough to get my hacks working, but here is how to set this hack up:

First you’ll need to define “/usr/local/bin/position”, a command which will position a terminal window somewhere on your screen:

#!/bin/sh
# Reposition the current window so its center lands at (x, y),
# measured from the top-left corner of the screen
osascript <<END
    # TODO: Grab the bounds of the actual window and center it properly
    tell application "Terminal" to set the position of window 1 to {$1 - 160, $2 - 100}
END

Next you’ll need to download MouseTools, a command-line tool that lets you control your mouse. It should be installed in /usr/local/bin as well.

Next, we define “/usr/local/bin/center”. It’s a one-liner: “MouseTools -location | xargs position”. MouseTools -location prints the x and y coordinates of your cursor, and xargs feeds them into the position command.
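
Written out as a file, the center script looks like this (a sketch; it assumes MouseTools and the position script above are both installed and on your PATH):

```shell
#!/bin/sh
# /usr/local/bin/center -- move the frontmost Terminal window to the cursor.
# Relies on MouseTools (prints "x y") and the position script defined above.
MouseTools -location | xargs position
```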

Finally, we need to make the center command execute every time a new terminal is created. For this, you can just go to “Preferences > Settings > Shell” and enter the command “center”.

And that’s it! Try opening a new terminal and watch it move to your mouse.

If my experience is worth anything, you’ll probably still need to do some dragging, but it’s far more manageable when terminals open up in the relative location you’re working.

And in case you were too lazy to actually do any of the above, it’s all packaged together on my GitHub.

Written by Ceasar Bautista

2012/07/17 at 22:39

Posted in Uncategorized

Tagged with

How to use default arguments with namedtuple

with 6 comments

I was running into some trouble earlier today trying to figure out how to use default arguments with namedtuple.

According to the docs:

Default values can be implemented by using _replace() to customize a prototype instance:

>>> Account = namedtuple('Account', 'owner balance transaction_count')
>>> default_account = Account('<owner name>', 0.0, 0)
>>> johns_account = default_account._replace(owner='John')

This is of course not really what I was looking for.

Searching around some more led to the Python mailing list where a software engineer by the name of Issac Morland suggests exactly what I was looking for.

2. It would be nice to be able to have default values for named tuple fields.
Using Signature it’s easy to do this – I just specify a dictionary of defaults
at named tuple class creation time.

To which Raymond Hettinger (the creator of collections) responds:

This came-up before, but I’ll take another look at it this weekend. If it
significantly complicates the API or impacts performance, then it is a
non-starter; otherwise, I’ll put together an implementation and float it on
ASPN for comments.

And the trail ends. Not sure what the full story is.

In any case, the solution is fairly simple. Just subclass the result of namedtuple and override __new__ as follows:

from collections import namedtuple
class Move(namedtuple('Move', 'piece start to captured promotion')):
    def __new__(cls, piece, start, to, captured=None, promotion=None):
        # add default values
        return super(Move, cls).__new__(cls, piece, start, to, captured, promotion)

As you’ll notice, super is being used a little oddly here, with cls being passed to both super and __new__. I’m not quite sure why that’s necessary, but I can say I’ve tested it and it does work.

EDIT: It appears this question on SO explains the weirdness of using super together with __new__. tl;dr: __new__ is a static method, not a class method, so it requires an explicit passing of cls.
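
To see the defaults in action, here’s a quick usage sketch of the subclass above (the piece and square values are made up purely for illustration):

```python
from collections import namedtuple

class Move(namedtuple('Move', 'piece start to captured promotion')):
    def __new__(cls, piece, start, to, captured=None, promotion=None):
        # Fill in the default values before delegating to the tuple constructor
        return super(Move, cls).__new__(cls, piece, start, to, captured, promotion)

# Trailing fields can now be omitted and fall back to None
m = Move('pawn', 'e2', 'e4')
print(m)  # Move(piece='pawn', start='e2', to='e4', captured=None, promotion=None)
```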

Written by Ceasar Bautista

2012/03/19 at 15:30

Posted in Uncategorized

Tagged with

Recursive Python Considered Harmful

with 11 comments

It somewhat recently occurred to me that recursion, despite its cleanliness, improves readability at a significant cost in memory. By piling on stack frames, at best we get a function that uses O(n) memory, and quite possibly one that is even worse. Given that any recursive function can be written using a while loop with O(1) memory bounds, recursion seems quite a poor choice for all practical purposes.

I brought this up at work and Alexey explained that while this is true, many compilers (particularly for functional languages) are intelligent enough to optimize the code for you if you use tail recursion. Being unaware of the concept, I looked up how to convert a regular recursive function into a tail-recursive one, and discovered I was wasting my time: Guido has decided not to implement tail-call optimization in Python.

This raises the question: is recursion ever useful in Python, or should we go back to our code and start writing those while loops?

As I see it, there is only one reason to use recursion in Python and that is when you plan to memoize your function (which makes the function take up O(n) memory anyway). (Additionally, it may be helpful to write functions out as recursive functions in order to assist in proving things, but I think, again, the function would almost certainly need to be memoized or transformed.)
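
To make the tradeoff concrete, here’s a sketch (my example, using functools.lru_cache, available since Python 3.2) of a memoized recursive Fibonacci next to its O(1)-memory loop equivalent:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    '''Memoized recursion: O(n) time, but O(n) cache entries plus stack frames.'''
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_iter(n):
    '''Plain loop: O(n) time, O(1) memory.'''
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(30), fib_iter(30))  # both 832040
```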

In all other cases, I believe while loops are the way to go. Besides the obvious problems with memory, there are a few other points worth mentioning.

For one, CPython caps the recursion depth at 1000 by default. While for some functions this may be okay, if you have any interest in scaling, this limit is way too small.
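
You can check the cap yourself (a quick sketch; note the RecursionError name arrived in Python 3.5, and older versions raise RuntimeError instead):

```python
import sys

# CPython's default recursion limit is 1000
print(sys.getrecursionlimit())

def countdown(n):
    return 0 if n == 0 else countdown(n - 1)

# Recursing far past the limit is stopped by the interpreter,
# not by an actual C-stack overflow
try:
    countdown(10 ** 6)
except RecursionError:
    print("hit the recursion limit")
```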

Additionally, function call overhead in Python is extremely high. (Source: the official Python wiki.)

Anyway, that’s my two cents.

Written by Ceasar Bautista

2011/10/23 at 00:51

Posted in Uncategorized

Tagged with ,

Games as Economies

with 2 comments

In a previous post, I wrote about how it’s impossible to price an object in a game according to a systematic formula, barring games of limited complexity and objects that cover the same span (that is, objects that are just different multiples of the same vector). Instead, I claimed at the time that pricing is arbitrary: designers can set the prices to whatever they like, and the game will always be fair so long as each player has equal opportunity. Thus, the designer’s job is really just to play with the prices until they produce the interplay he is looking for.

In another post, I wrote about research in RTS games and why spending resources for the option to train new units can pay off. While upgrades directly boost the strength of an army, research unlocks new units for a price, and that price is only worth paying if one expects to use the unlocked units in such a way that the utility of their use exceeds the initial cost of the research. Thus, the price of research is also arbitrary.

Having since studied microeconomics, I’d like to revisit these topics.

An indifference curve is a line that shows combinations of goods among which a consumer is indifferent.

Economists call the phenomenon I just described the marginal rate of substitution, or MRS for short. Formally, the MRS is defined as “the rate at which a person will give up good y for good x while remaining indifferent”. In other words, it’s the price at which you are willing to buy something using something else (i.e., how much you are willing to shell out for a can of soda). What’s interesting about the MRS is that it changes: the more a person has of good x, the more of good x they are willing to trade for good y. Said another way, because billionaires have so much money, they don’t mind paying $5.00 for a hot dog.

This is a far more intuitive way of looking at things than trying to predict prices from the attributes of the game objects. In short, all one needs to understand now is that players will buy an object when its utility exceeds its cost.

While the main point I wanted to convey has been made, I want to just put down some related ideas that don’t exactly deserve a post of their own but that I think are worth sharing.

-If you are familiar with Dominion, you may know the MRS as “the Silver test”. (If you are not familiar with Dominion, all you need to know is that players regularly face the choice of buying cards with special effects or treasures, such as a Silver, which increase income.) That is, when making a non-trivial purchasing decision, one always has to consider whether the object at hand is in fact better than a Silver. What I find most interesting about the Silver test is how many players completely fail to pick up on this rule, being regularly misled by the incorrect assumption that “things that cost more must be better”. It certainly changed a few paradigms of mine after I noticed what was going on.

-Knowing when to buy what is the backbone of many games. In Dominion, playing cards is fairly trivial- but the decisions of which cards to buy each turn is often rather complex, and in the majority of cases, determines who wins. Likewise, in StarCraft, micro-ing units is fairly simple- but, again, the decisions of which units to buy and which tech to research is far more complex, and is far more important than any tactical feat. In short, your economics textbook may be more valuable than The Art of War.

Written by Ceasar Bautista

2011/08/27 at 19:39

Python Tips

leave a comment »

The “with” statement

This is just incredibly beautiful code.

Check out the source for more details on Python’s “with” statement if you’re interested.
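
The canonical illustration of “with” is file handling (my own minimal example, not the snippet the post originally linked to):

```python
# open() returns a context manager, so the file is guaranteed to be
# closed when the block exits, even if an exception is raised inside it
with open('/tmp/example.txt', 'w') as f:
    f.write('hello\n')

with open('/tmp/example.txt') as f:
    print(f.read())  # prints "hello"
```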

Get current directory

Too useful not to know.  Source is here.
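
For the impatient, the call in question is presumably os.getcwd() (a guess at what the linked snippet showed, but it is the idiomatic one):

```python
import os

# Absolute path of the process's current working directory
cwd = os.getcwd()
print(cwd)
```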

max and min accept a sorting function

Instead of writing out this:

def best(candidates, scoref):
    '''Use scoref to find the candidate with the highest score.'''
    highest_score = float('-inf')
    best = None
    for candidate in candidates:
        candidate_score = scoref(candidate)
        if candidate_score > highest_score:
            highest_score = candidate_score
            best = candidate
    return best

You can just write:

max(iterable, key=scoref)

Don’t forget the “key” keyword, or Python will think you are comparing the list against the function. This is pretty cool because it makes things like getting the max value in a dictionary trivial.

>>> a = {'a': 1, 'b':2, 'c':3}
>>> a
{'a': 1, 'c': 3, 'b': 2}
>>> max(a, key=lambda x: a[x])
'c'

Written by Ceasar Bautista

2011/08/02 at 22:26

Posted in Uncategorized

Tagged with ,

Prime Generator

with 2 comments

Inspired by the last post, just thought this was too cool not to share. The function generates primes!

def prime_gen():
    '''Generate primes by incrementally extending a sieve.'''
    offset = 2
    index = offset
    sieve = list(range(offset, offset ** 2))
    found = []
    while True:
        # Extend the sieve window up to (index + 1) ** 2
        start = index ** 2
        end = (index + 1) ** 2
        sieve.extend(range(start, end))

        num = sieve[index - offset]
        if num:
            # index survived the sieve, so it's prime
            sieve = sieve[index - offset:]
            offset = num
            yield num
            found.append(num)

        # Cross off multiples of every known prime in the new window
        for curr in found:
            j = start // curr * curr
            while j < len(sieve) + offset:
                sieve[j - offset] = 0
                j += curr
        index += 1
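
As a point of comparison (my addition, not part of the original post), the same incremental idea is often written with a dict that maps each upcoming composite to a prime that divides it, which avoids storing the sieve window as a list:

```python
from itertools import islice

def prime_gen_dict():
    '''Incremental sieve: map each upcoming composite to a prime divisor.'''
    composites = {}
    n = 2
    while True:
        if n not in composites:
            yield n                 # n was never crossed off, so it's prime
            composites[n * n] = n   # the first composite it will cross off
        else:
            p = composites.pop(n)
            step = n + p            # slide p's marker to its next free multiple
            while step in composites:
                step += p
            composites[step] = p
        n += 1

print(list(islice(prime_gen_dict(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```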

Written by Ceasar Bautista

2011/07/10 at 03:57

Posted in Uncategorized

Tagged with , ,