<h1>Peter Eisentraut's Blog</h1>
on software development, open source, databases, and geek stuff

<h2>New blog (2014-10-25)</h2>
My blogging continues <a href="http://peter.eisentraut.org/">here</a>.

<h2>PostgreSQL trash can (2014-04-01)</h2>
The <a href="https://github.com/petere/pgtrashcan">PostgreSQL trash can</a> is a PostgreSQL plugin that implements a trash can/wastebasket/rubbish bin/recycling container. You drop a table and it's not really gone but only moved to the trash. This allows desktop-minded users to drop tables willy-nilly while giving them the warm and fuzzy feeling of knowing that their data is still there (while giving administrators the cold and, uh, unfuzzy feeling of knowing that disk space will never really be freed again). Now they only need to think of "vacuum" as "disk defragmentation", and they'll feel right at home.

<h2>Design by committee (2013-09-25)</h2>
Design by committee is usually a term of abuse, but sometimes it's perhaps not the worst alternative. At the opposite end of the spectrum, there is design by disconnected individuals. That is how you get
<pre>ALTER TABLE tbl OWNER TO something</pre>
but
<pre>ALTER TABLE tbl SET SCHEMA something</pre>
in PostgreSQL.
<p>
Maybe a committee faced with this inconsistency would arrive at the compromise
<pre>ALTER TABLE tbl [SET] {OWNER|SCHEMA} [TO] something</pre>
?

<h2>Testing PostgreSQL extensions on Travis CI revisited (2013-08-28)</h2>
My <a href="http://petereisentraut.blogspot.com/2013/07/testing-postgresql-extensions-on-travis.html">previous attempt</a> to set up multiple-PostgreSQL-version testing on <a href="https://travis-ci.org/">Travis CI</a> worked OK, but didn't actually make good use of the features of Travis CI. So I stole, er, adapted an idea from <a href="https://github.com/clkao/plv8js"><code>clkao/plv8js</code></a>, which uses an environment variable matrix to control which version to use. This makes things much easier to manage and actually fires off parallel builds, so it's also faster. I've added this to all my repositories for PostgreSQL extensions now. (See some examples: <a href="https://github.com/petere/pglibuuid/blob/master/.travis.yml">pglibuuid</a>, <a href="https://github.com/petere/plxslt/blob/master/.travis.yml">plxslt</a>, <a href="https://github.com/petere/pgvihash/blob/master/.travis.yml">pgvihash</a>, <a href="https://github.com/petere/pgpcre/blob/master/.travis.yml">pgpcre</a>, <a href="https://github.com/petere/plsh/blob/master/.travis.yml">plsh</a>)

<h2>Automating patch review (2013-08-28)</h2>
<p>I think there are two kinds of software development organizations
(commercial or open source):</p>
<ol>
<li><p>Those who don’t do code review.</p></li>
<li><p>Those who are struggling to keep up with code review.</p></li>
</ol>
<p>PostgreSQL is firmly in the second category. We never finish commit
fests on time, and lack of reviewer resources is frequently mentioned
as one of the main reasons.</p>
<p>One way to address this problem is to recruit more reviewer resources.
That has been tried; it’s difficult. The other way is to reduce the
required reviewer resources. We can do this by automating things a
little bit.</p>
<p>So I came up with a bag of tools that does the following:</p>
<ol>
<li><p>Extract the patches from the commit fest into Git.</p></li>
<li><p>Run those patches through an automated test suite.</p></li>
</ol>
<p>The first part is done by my script <a href="https://github.com/petere/commitfest-tools/blob/master/commitfest_branches"><code>commitfest_branches</code></a>. It extracts the email message ID for the latest
patch version of each commit fest submission (either from the database or the RSS feed). From the message ID, it downloads the raw email message and
extracts the actual patch file. Then that patch is applied to the Git
repository in a separate branch. This might fail, in which case I
report that back. At the end, I have a Git repository with one branch
per commit fest patch submission. A copy of that Git repository is
made available here: <a href="https://github.com/petere/postgresql-commitfest">https://github.com/petere/postgresql-commitfest</a>.</p>
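<p>The core of the extraction step looks roughly like this. This is a hand-wavy sketch, not the actual script: the raw-message URL scheme, the variable names, and the branch naming are assumptions here, and a real version also has to cope with attachments, compressed patches, and multi-patch series.</p>
<pre><code># sketch: turn one commit fest submission into a Git branch;
# $msgid and $subm_id are assumed to come from the commit fest app
curl -fsS "http://www.postgresql.org/message-id/raw/$msgid" > /tmp/patch.mbox
git checkout -b "cf/$subm_id" master
# split the mail into commit message text and patch payload
git mailinfo /tmp/msg.txt /tmp/patch.diff < /tmp/patch.mbox
git apply /tmp/patch.diff || echo "submission $subm_id does not apply"
</code></pre>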
<p>The second part is done by my <a href="http://pgci.eisentraut.org/jenkins/">Jenkins instance</a>, which I have <a href="http://petereisentraut.blogspot.com/2013/01/postgresql-and-jenkins.html">written
about before</a>. It runs the same job as it runs with the normal Git
master branch, but over all the branches created for the commit fest.
At the end, you get a build report for each commit fest submission.
See the results here:
<a href="http://pgci.eisentraut.org/jenkins/view/PostgreSQL/job/postgresql_commitfest_world/">http://pgci.eisentraut.org/jenkins/view/PostgreSQL/job/postgresql_commitfest_world/</a>.
You’ll see that a number of patches had issues. Most were compiler
warnings, a few had documentation build issues, a couple had genuine
build failures. Several (older) patches failed to apply.
Those don’t show up in Jenkins at all.</p>
<p>This is not tied to Jenkins, however. You can run any other build
automation against that Git repository, too, of course.</p>
<p>There are still some manual steps required. In particular,
<code>commitfest_branches</code> needs to be run and the build reports need to be
reported back manually. Fiddling with all those branches is
error-prone. But overall, this is much less work than manually
downloading and building all the patch submissions.</p>
<p>My goal is that by the time a reviewer gets to a patch, it is ensured
that the patch applies, builds, and passes the tests. Then the
reviewer can concentrate on validating the purpose of the patch and
checking the architectural decisions.</p>
<p>What needs to happen next:</p>
<ul>
<li><p>I’d like an easier way to post feedback. Given a message ID for the
original patch submission, I need to fire off a reply email that
properly attaches to the original thread. I don’t have an easy way to do
that.</p></li>
<li><p>Those reply emails would then need to be registered in the commit
fest application. Too much work.</p></li>
<li><p>There is another component to this work flow that I have not
finalized: checking regularly whether the patches still apply
against the master branch.</p></li>
<li><p>More automated tests need to be added. This is well understood and
a much bigger problem.</p></li>
</ul>
<p>In the meantime, I hope this is going to be useful. Let me know if
you have suggestions, or send me pull requests on GitHub.</p>
<h2>Testing PostgreSQL extensions on Travis CI (2013-07-17)</h2>
I have cobbled together some scripts to be able to test PostgreSQL extensions against multiple PostgreSQL major versions on <a href="https://travis-ci.org/">Travis CI</a>. (This requires that the extension is hosted on <a href="https://github.com/">GitHub</a>.) See the <a href="https://github.com/petere/plsh/blob/master/.travis.yml">configuration for PL/sh</a> and the <a href="https://travis-ci.org/petere/plsh/builds/9203634">build output</a> as examples. Perhaps others will find this useful for their extensions as well.

<h2>Tricky shell local variables (2013-07-16)</h2>
<p>I have a word of warning against improper use of <code>local</code> in shell functions.</p>
<p>If you are using shell functions, you might want to declare some
variables local to the shell function. That is good. The basic
syntax for that is</p>
<pre><code>local a b c
</code></pre>
<p>In some shells, you can also combine the <code>local</code> declaration and
assignment, like this:</p>
<pre><code>local foo=$1
local bar=$2
</code></pre>
<p>(The Debian policy even explicitly <a href="http://www.debian.org/doc/debian-policy/ch-files.html#s-scripts">allows it</a>.)</p>
<p>This is somewhat dangerous.</p>
<p>Bare shell assignment like</p>
<pre><code>foo=$bar
</code></pre>
<p>does not perform word splitting, so the above is safe even if there
are spaces in <code>$bar</code>. But the <code>local</code> command does perform
word splitting (because it can take multiple arguments, as in the
first example), so the seemingly similar</p>
<pre><code>local foo=$bar
</code></pre>
<p>is not safe.</p>
<p>This can be really confusing when you add <code>local</code> to existing code and
it starts breaking.</p>
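<p>Here is a small demonstration. The exact failure mode varies by shell; the behavior described below is what dash does, and the argument value is of course made up:</p>
<pre><code>f() {
    local foo=$1      # word splitting happens here
    echo "foo=$foo"
}
f "hello world"
</code></pre>
<p>In dash, this prints <code>foo=hello</code>, and <code>world</code> is quietly declared as a second local variable. With <code>local foo="$1"</code>, it prints <code>foo=hello world</code> as intended.</p>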
<p>You can avoid this, of course, by always quoting everything, like
<pre><code>local foo="$bar"
</code></pre>
<p>but overquoting isn't always desirable, because it can make code less
readable when commands are nested, like</p>
<pre><code>local foo="$(otherfunc "other arg")"
</code></pre>
<p>(Nesting is legal and works fine in this case, however.)</p>
<p>I suggest using <code>local</code> only for declaring variables, and using
separate assignment statements. That way, all assignments are parsed
in the same way.</p>

<h2>Autopex: PostgreSQL extension installation magic (2013-06-18)</h2>
<a href="https://github.com/petere/autopex">Autopex</a> is the brainchild of a long night at the Royal Oak. It ties together <a href="https://github.com/petere/pex">Pex</a> and event triggers to magically download and build any extension that you install. So after you have set everything up you can do, say, <code>CREATE EXTENSION plproxy</code>, and it will transparently download and build plproxy for you. (Actually, this only works if the extension name is the same as the package name. I'm planning to fix that.)
<p>
Note 1: You can't install Autopex via Pex, yet.
<p>
Note 2: I guess the next logical step would be Autoautopex, which installs Autopex and Pex automatically somehow. Patches welcome.
<p>
I suppose with logical replication, this might actually end up installing the extension code on the replication slaves as well. That would be pretty neat.
<h2>Moving to C++ (2013-05-01)</h2>
<p><a href="http://gcc.gnu.org/gcc-4.8/">GCC 4.8</a> was recently released. This is
the first GCC release that is written in C++ instead of C. Which got
me thinking ...</p>
<p>Would this make sense for PostgreSQL?</p>
<p>I think it's worth a closer look.</p>
<p>Much of GCC's job isn't actually that much different from PostgreSQL.
It parses language input, optimizes it, and produces some output. It
doesn't have a storage layer, it just produces code that someone else
runs. Also note that Clang and LLVM are written in C++. I think it
would be fair to say that these folks are pretty well informed about
selecting a programming language for their job.</p>
<p>It has become apparent to me that C is approaching a dead end.
Microsoft isn't updating their compiler to C99, advising people to
move to C++ instead. So as long as PostgreSQL (or any other project,
for that matter) wants to support that compiler, they will be stuck on
C89 forever. That's a long time. We have been carefully introducing
the odd post-C89 feature, guarded by configure checks and #ifdefs,
but that will either come to an end, or the range of compilers that
actually get the full benefit of the code will become narrower and
narrower.</p>
<p>C++ on the other hand is still a vibrant language. New standards come
out and get adopted by compiler writers. You know how some
people require Java 7 or Python 2.7 or Ruby 1.9 for their code? You wish you
could have that sort of problem for your C code! With C++ you
reasonably might.</p>
<p>I'm also sensing that at this point there are more C++ programmers
than C programmers in the world. So using C++ might help grow the
project better. (Under the same theory that supporting Windows
natively would attract hordes of Windows programmers to the project,
which probably did not happen.)</p>
<p>Moving to C++ wouldn't mean that you'd have to rewrite all your code
as classes or that you'd have to enter template hell. You could
initially consider a C++ compiler a pickier C compiler, and introduce
new language features one by one, as you had done before.</p>
<p>Most things that C++ is picky about are things that a C programmer
might appreciate anyway. For example, it refuses implicit conversions
between void pointers and other pointers, or intermixing different
enums. Actually, if you review various design discussions about the
behavior of SQL-level types, functions, and type casts in PostgreSQL,
PostgreSQL users and developers generally lean on the side of a strict
type system. C++ appears to be much more in line with that thinking.</p>
<p>There are also a number of obvious areas where having the richer
language and the richer standard library of C++ would simplify coding,
reduce repetition, and avoid bugs: memory and string handling;
container types such as lists and hash tables; fewer macros necessary;
the node management in the backend screams class hierarchy; things
like xlog numbers could be types with operators; careful use of
function overloading could simplify some complicated internal APIs.
There are more. Everyone probably has their own pet peeve here.</p>
<p>I was looking for evidence of this C++ conversion in the GCC source
code, and it's not straightforward to find. As a random example,
consider
<a href="http://gcc.gnu.org/git/?p=gcc.git;a=blob;f=gcc/gimple.c;h=64f7b1a19f2ada391b12510c9724c5c292f52090;hb=master"><code>gimple.c</code></a>.
It looks like a normal C source file at first glance. It is named
<code>.c</code> after all. But it actually uses C++ features (exercise for the
reader to find them), and the build process compiles it using a C++
compiler.</p>
<p>LWN has an <a href="https://lwn.net/Articles/542457/">article</a> about how GCC moved to C++.</p>
<p>Thoughts?</p>

<h2>Installing multiple PostgreSQL versions on Homebrew (2013-04-02)</h2>
<p>I was going to post this yesterday, but some might have thought that it was a
joke. April 1st is always an annoying day to communicate real information.</p>
<p>If you have been fond of the way Debian and Ubuntu manage multiple
PostgreSQL versions in parallel, you can now have the same on OS X
with Homebrew:</p>
<pre><code>brew tap petere/postgresql
brew install postgresql-9.2
# etc.
brew install --HEAD postgresql-common
</code></pre>
<p><code>postgresql-common</code> is the same code as in Debian, only mangled a little.</p>
<p>Now you have all the client programs symlinked through <code>pg_wrapper</code>, and
you can use the server management tools such as:</p>
<pre><code>pg_createcluster 9.2 main
pg_ctlcluster 9.2 main start
pg_lsclusters
</code></pre>
<p>Let me know if you find this useful.</p>
Links:
<ul>
<li><a href="https://github.com/petere/homebrew-postgresql"><code>homebrew-postgresql</code></a>
<li><a href="https://github.com/petere/postgresql-common/tree/homebrew"><code>postgresql-common</code> <code>homebrew</code> branch</a>
</ul>
<h2>Lists and Tuples (2013-02-17)</h2>
I had initially found the divide between lists and tuples in Python confusing. I came from a database background, so I have a certain expectation of what a tuple might be. If you read up on what the difference is in Python, you will find that a) tuples are immutable, and b) singleton tuples use a funny syntax. So just use lists, because it's easier to read and you can't go wrong that way. Oh, and they are both sequences, another overloaded term.
<p>
(Yes, there are some details omitted here, such as that since a tuple is immutable, it is hashable and can be used as a dictionary key. But I think that is used fairly seldom.)
<p>
Then I came across Haskell and it dawned on me: Was this just a poorly mangled feature from Haskell? I don't know the history, but it looks a bit like it. You see, Haskell also has lists and tuples. Lists are delimited by square brackets, and tuples are delimited by parentheses:
<pre>let alist = [1, 2, 3]
let atuple = (1, 2, 3)</pre>
(Technically, in Python, tuples are not delimited by parentheses, but they often appear that way.) But the difference is that Haskell does not use parentheses for any other purpose, such as delimiting function arguments. It uses spaces for that. (So it looks more like a shell script at times.)
<pre>Python: len([1, 2, 3])
Haskell: length [1, 2, 3]</pre>
But in Haskell, tuples are not mutable lists and lists are not mutable tuples. Tuples and lists are quite different but complementary things. A list can only contain elements of the same type. So you can have lists
<pre>[1, 2, 3, 4, 5]
["a", "b", "c", "d"]</pre>
but not
<pre>[1, 2, "a"]</pre>
A tuple, on the other hand, can contain values of different types
<pre>(1, 2, "a")
(3, 4, "b")</pre>
A particular type combination in a tuple creates a new type on the fly, which becomes interesting when you embed tuples in a list. So you can have a list
<pre>[(1, 2, "a"), (3, 4, "b")]</pre>
but not
<pre>[(1, 2, "a"), (3, 4, 5)]</pre>
Because Haskell is statically typed, it can verify this at compile time.
<p>
If you think in terms of relational databases, the term tuple in particular makes a lot of sense in this way. A result set from a database query would be a list of tuples.
<p>
The arrival of the <tt>namedtuple</tt> also supports the notion that tuples should be thought of as combining several pieces of data of different natures, but of course this is not enforced in either tuples or named tuples.
<p>
Now, none of this is relevant to Python. Because of duck typing, a database query result set might as well be a list of lists or a tuple of tuples or something different altogether that emulates sequences. But I found it useful to understand where this syntax and terminology might have come from.
<p>
Looking at the newer classes <tt>set</tt> and <tt>frozenset</tt>, it might also help to sometimes think of a tuple as a <q>frozenlist</q> instead, because this is closer to the role it plays in Python.
<h2>pgindent Jenkins job (2013-02-14)</h2>
I have set up a Jenkins <a href="http://pgci.eisentraut.org/jenkins/job/postgresql_master_pgindent/">job</a> that runs pgindent. Besides checking that the procedure of running pgindent works, it also provides a <q>preview</q> of what pgindent would do with the current source (<code><a href="http://pgci.eisentraut.org/jenkins/job/postgresql_master_pgindent/lastSuccessfulBuild/artifact/pgindent.diff">pgindent.diff</a></code>), which can be educational or terribly confusing.
<h2>Introducing the Pex package manager for PostgreSQL (2013-02-01)</h2>
I have written a new light-weight package manager for PostgreSQL, called <q>pex</q>. It's targeted at developers, allows easy customization, and supports multiple PostgreSQL installations.
<p>
Here is how it works:
<p>
Installation:
<pre>git clone git://github.com/petere/pex.git
cd pex
sudo make install</pre>
<p>
Install some packages:
<pre>pex init
pex install plproxy
pex search hash
pex install pghashlib</pre>
<p>
Multiple PostgreSQL installations:
<pre>pex -g /usr/local/pgsql2 install plproxy
pex -p 5433 install pghashlib</pre>
<p>
Upgrade:
<pre>pex update
pex upgrade</pre>
<p>
It works a bit like Homebrew, except that it doesn't use Ruby or a lot of metaphors. ;-)
<p>
Check it out at <a href="https://github.com/petere/pex">https://github.com/petere/pex</a>.

<h2>PostgreSQL and Jenkins (2013-01-01)</h2>
A lot of places use <a href="http://jenkins-ci.org/">Jenkins</a> nowadays, including where I now work and have previously worked. I enjoy working with Jenkins, and so I always wanted to try out how this would work with <a href="http://www.postgresql.org/">PostgreSQL</a>. Obviously, there would be some overlap with the <a href="http://buildfarm.postgresql.org/">build farm</a>, but that's OK. The point of the build farm, after all, is to build things in many different ways to find potential problems, and this would just support that overall effort.
<p>
So I have set this up now: <a href="http://pgci.eisentraut.org/jenkins/">http://pgci.eisentraut.org/jenkins/</a>
<p>
It's already been very helpful during the last couple of weeks that I've run this. The main point behind the effort is to automate things. These are things I do just about every day and won't have to anymore:
<ul>
<li>build PostgreSQL
<li>check for compiler warnings
<li>run various test suites
<li>do this for all supported branches
</ul>
These are things I do every couple of weeks and have now automated:
<ul>
<li>check distribution building (<code>make distcheck</code>)
<li>test build of additional documentation formats
<li><code>cpluspluscheck</code>
<li>check external web links in the documentation (The <a href="http://pgci.eisentraut.org/jenkins/job/postgresql_master_linklint/">job</a> for that currently appears to be reporting false positives. Use with caution.)
<li>test <a href="http://pgci.eisentraut.org/jenkins/job/postgresql_master_coverage/Coverage/">coverage</a> reporting
</ul>
Moreover, I have set up to build some extensions and external modules, which weren't regularly tested. (The build farm is making some efforts in this area, though.)
<p>
Actually, many of the checks I had set up immediately found problems: newly introduced compiler warnings, secondary documentation format builds broken, cpluspluscheck failing, broken links in the HTML documentation, various extensions no longer building with PostgreSQL 9.3devel.
<p>
But there is more cool stuff:
<ul>
<li>There are various RSS feeds for all builds or failed builds.
<li>You can interact with the system on mobile devices. I use JenkinsMobi for iOS.
<li>You can get up to date <a href="http://pgci.eisentraut.org/jenkins/job/postgresql_master_world/Documentation/">documentation</a> builds on a more predictable schedule.
</ul>
<p>
The one thing (just about) it doesn't do is test operating system and CPU architecture portability. Jenkins comes from a Java background, where this isn't much of an issue, and so there isn't good built-in support for that sort of thing. But anyway, we have the build farm for that.
<p>
You can get the code at <a href="http://bitbucket.org/petere/pgci">http://bitbucket.org/petere/pgci</a>. The entire setup is automated with Puppet. You can fork it and set up your own (or send me your changes), or you can run it locally using <a href="http://www.vagrantup.com/">Vagrant</a> (which is what I do to test changes).
<p>
If you have any ideas, let me know (file an issue on Bitbucket). I have plans for a number of enhancements already, foremost pg_upgrade testing. Also, let me know if there are additional extensions you want tested. I have just put in a few I use myself at the moment, but others can easily be added.
<p>
Happy New Year!

<h2>psqlrc files (2012-10-01)</h2>
In PostgreSQL 9.2, you can use major-version-specific <code>.psqlrc</code> files, such as <code>.psqlrc-9.2</code>. PostgreSQL 9.2 also added the "include relative" command <code>\ir</code> to psql. Combining these two, you can set up psql initialization to take advantage of any new features you want without breaking the use of old psql releases.
<p>
For example, I'd like to set up psql to automatically use <code>\x auto</code>. But if I just put that into <code>.psqlrc</code>, older psql releases will complain about an unknown command. (I usually have multiple PostgreSQL versions installed, and I share dotfiles across hosts.) On the other hand, I don't want to have to duplicate the entire <code>.psqlrc</code> file to add one command, which is where <code>\ir</code> comes in.
<p>
Here is what I use, for example:
<dl>
<dt><code>.psqlrc-9.2</code>
<dd><pre>\ir .psqlrc
\set QUIET yes
\set COMP_KEYWORD_CASE preserve-lower
\x auto
\unset QUIET</pre>
<dt><code>.psqlrc-9.3</code>
<dd><pre>\ir .psqlrc-9.2</pre>
</dl>
<h2>pgxnclient supports tarballs and HTTP (2012-09-11)</h2>
Need to install a PostgreSQL server add-on module? The <tt>devel</tt> branch of <a href="https://github.com/dvarrazzo/pgxnclient">pgxnclient</a> now supports this type of thing:
<pre>pgxnclient install http://pgfoundry.org/frs/download.php/3274/plproxy-2.4.tar.gz</pre>
This downloads, unpacks, builds, and installs. And the module doesn't need to be on PGXN. And of course you don't have to use HTTP; a file system location will work as well.
<p>
I think this can be very useful, especially during development, when not everything is available in packaged form, or even for deployment, if you don't want to bother packaging everything and have been installing from source anyway.
<h2>Funny version numbers (2012-08-13)</h2>
Often, I install a new Debian package using apt-get install, and as the progress output flies by, I wonder: whoa, should I really be using a piece of software with <i>that</i> version number?
<p>
It says a lot, after all. If I see
<pre>tool 2.4.1-2</pre>
then I (subconsciously) think, yeah, the upstream maintainers are obviously sane, the tool has been around for a while, they have made several major and minor releases, and what I'm using has seen about one round of bug fixing, and a minimal amount of tweaking by the Debian maintainer.
<p>
On the other hand, when I see
<pre>tool 7.0.50~6.5~rc2+0.20120405-1</pre>
I don't know what went on there. The original release version 7.0.50 was somehow wrong and had to be renamed 6.5? And then the 2nd release candidate of that? And then even that wasn't good enough, and some dated snapshot had to be taken?
<p>
Now, of course, there are often reasons for things like this, but it doesn't look good, and I felt it was getting out of hand a little bit.
<p>
I tried to look into this some more and find a regular expression for a reasonably sane version number. It's difficult. This is how far I've gotten: <a href="https://gist.github.com/3345974">https://gist.github.com/3345974</a>. But this still lists more than 1500 packages with funny version numbers. Which could be
cause for slight concern.
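<p>A crude approximation of the idea, assuming a Debian-ish system (the regular expression here is my simplistic stand-in for <q>sane</q>; the one in the gist is more elaborate):</p>
<pre><code>dpkg-query -W -f '${Package} ${Version}\n' |
    grep -Ev ' [0-9]+(\.[0-9]+){0,3}(-[0-9]+)?$'
</code></pre>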
<p>
Take a look at what this prints. You can't make some of that stuff up.

<h2>Tracing shell scripts with time stamps (2012-07-18)</h2>
A random tip for shell script hackers. You know that with <code>set -x</code> you can turn on tracing, so that every command is printed before being executed.
In bash, you can also customize the output prefix by setting the <code>PS4</code> variable. The default is <code>PS4='+ '</code>.
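<p>For instance, with a toy script (the timestamps in the output are of course made up for illustration):</p>
<pre><code>$ cat demo.sh
set -x
PS4='+\t '
sleep 2
true
$ bash demo.sh
+ PS4='+\t '
+22:15:01 sleep 2
+22:15:03 true
</code></pre>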
This came in handy recently when I wanted to "profile" a deployment script, to see why it took so long. Ordinarily, I might have sprinkled it with <code>date</code> calls. Instead, I merely added
<pre>set -x
PS4='+\t '</pre>
near the top. <code>\t</code> stands for time stamp. (The script was already using bash explicitly, as opposed to <code>/bin/sh</code>.) That way, every line is prefixed by a time stamp, and the logs could easily be analyzed to find a possible performance bottleneck.

<h2>Base backup compression options (2012-05-20)</h2>
<p>I've been looking at my PostgreSQL base backups. They are run using
the traditional</p>
<pre><code>tar -c -z -f basebackup.tar.gz $PGDATA/...
</code></pre>
<p>way (many details omitted). I haven't gotten heavily into using
<code>pg_basebackup</code> yet, but the following could apply there just as well.</p>
<p>I had found some of the base backups to be pretty slow, so I dug a
little deeper. I was surprised to find that the job was completely
CPU bound. The blocking factor was the <code>gzip</code> process. So it was
worth thinking about other compression options. (The alternative is
of course no compression, but that would waste a lot of space.)</p>
<p>There are two ways to approach this:</p>
<ul>
<li><p>Use a faster compression method.</p></li>
<li><p>Parallelize the compression.</p></li>
</ul>
<p>For a faster compression method, there is <code>lzop</code>, for example. GNU
<code>tar</code> has support for that, by using <code>--lzop</code> instead of <code>-z</code>. It
gives a pretty good speed improvement, but the compression results are
of course worse.</p>
<p>For parallelizing compression, there are parallel (multithreaded)
implementations of the well-known <code>gzip</code> and <code>bzip2</code> compression
methods, called <code>pigz</code> and <code>pbzip2</code>, respectively. You can hook these
into GNU <code>tar</code> by using the <code>-I</code> option, something like <code>-I pigz</code>.
Alternatively, put them into a pipe after <code>tar</code>, so that you can pass
them some options. Because otherwise they will bring your system to a
screeching halt! If you've never seen a system at a constant 1600%
CPU for 10 minutes, try these.</p>
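<p>For example, something along these lines caps <code>pigz</code> at four processors, for both the backup and the restore direction (a sketch; the paths and the process count are placeholders):</p>
<pre><code>tar -c -f - $PGDATA/... | pigz -p 4 > basebackup.tar.gz
pigz -d -p 4 < basebackup.tar.gz | tar -x -f -
</code></pre>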
<p>If you have a regular service window or natural slow time at night or
on weekends, these tools can be quite useful, because you might be
able to cut down the time for your base backup from, say, 2 hours to 10
minutes. But if you need to be always on, you will probably want to
qualify this a little, by reducing the number of CPUs used for this
job. But it can still be pretty effective if you have many CPUs and
want to dedicate a couple to the compression task for a while.</p>
<p>Personally, I have settled on <code>pigz</code> as my standard weapon of choice
now. It's much faster than <code>pbzip2</code> and can easily beat
single-threaded <code>lzop</code>. Also, it produces standard <code>gzip</code> output, of
course, so you don't need to install special tools everywhere, and you
can access the file with standard tools in a bind.</p>
<p>Also, consider all of this in the context of restoring. No matter how
you take the backup, wouldn't it be nice to be able to restore a
backup almost 8 or 16 or 32 times faster?</p>
<p>I have intentionally not included any benchmark numbers here, because
it will obviously be pretty site-specific. But it should be easy to
test for everyone, and the results should speak for themselves.</p>

<h2>My (anti-)take on database schema version management (2012-05-15)</h2>
<p>There were a number of posts recently about managing schema versions
and schema deployment in PostgreSQL. I have analyzed these with great
interest, but I have concluded that they are all more or less
significantly flawed. (Of course, most of these solutions do in fact
work for someone, but they are not general enough to become canonical
go-to solutions for this problem class, in my opinion.) I have
developed a list of elimination criteria by which I can evaluate
future developments in this area. So here are some of the things that
I don't want in my schema version management system:</p>
<ul>
<li><p>Using schemas for distinguishing multiple versions (like
<a href="http://feedproxy.google.com/~r/blogspot/EzOjx/~3/HrUj6PXPD-c/schema-based-versioning-and-deployment.html">this</a>,
but that's actually more about API versioning). That simply won't
work for deploying objects that are not in schemas, such as casts,
languages, extensions, and, well, schemas.</p></li>
<li><p>Using extensions (like
<a href="http://philsorber.blogspot.com/2012/01/deploy-schemata-like-boss.html">this</a>).
Well, this could work. But extensions by themselves do nothing
about the core problem. They are just an SQL wrapper interface
around upgrade scripts. You still need to write the upgrade
scripts, order them, test them, package them. The extension
mechanism might replace the, say, shell script that would
otherwise run the upgrade files in a suitable order. Another
issue is that extensions require access to the server file system.
Changing this is being
<a href="https://commitfest.postgresql.org/action/patch_view?id=746">discussed</a>
as "inline extensions", but there is no consensus. This is a
smaller problem, but it needs to be thought about. Also, I do
need to support PostgreSQL 9.0 and earlier for a little while more.</p></li>
<li><p>Requiring naming each change (patch names, like
<a href="http://www.depesz.com/2010/08/22/versioning/">this</a>). Naming
things is hard. Numbering things is easy. And how many good
names are you going to still be able to come up with after 100 or
so changes?</p>
<p>Take a lesson from file version control systems: versions are
numbers or, if it must be, hashes or the like (UUIDs have been
suggested).</p></li>
<li><p>Using a version control tool for tracking upgrade paths (like
<a href="http://justatheory.com/computers/databases/sqitch-draft.html">this</a>).
Sqitch, unlike the initial draft of this concept, doesn't actually
require a version control tool for deployment, which wouldn't have
worked for me, because what we ship is a tarball or a deb/rpm-type
package. But it still requires you to maintain some kind of
sanity in your version control history so that the tool can make
sense out of it. That sounds fragile and inconvenient. The other
choice appears to be writing the plan files manually without any
VCS involvement, but then this loses much of the apparent appeal
of this tool, and it's really no better than the "naming each
change" approach mentioned above.</p></li>
<li><p>Taking snapshots or the like of a production or staging or central
development system. Production systems and staging systems should
be off limits for this sort of thing. Central development systems
don't exist, because with distributed version control, every
developer has their own setups, branches, deployments, and world
views.</p>
<p>You could set it up so that every developer gets their own
test database, sets up the schema there, takes a dump, and checks
that in. There are going to be problems with that, including that
dumps produced by <code>pg_dump</code> are ugly and optimized for restoring,
not for developing with, and they don't have a deterministic
output order.</p></li>
<li><p>Storing the database source code in a different place or in a
different manner than the rest of the source code. This includes
using a version control system like mentioned above (meaning
storing part of the information in the version control meta
information rather than in the files that are checked into the
version control system in the normal way), using a separate
repository like Post Facto, or using something like the mentioned
staging server.</p>
<p>The source is the source, and it must be possible to check out,
edit, build, test, and deploy everything in a uniform and
customary manner.</p></li>
<li><p>Allowing lattice-like dependencies between change sets (like most
examples cited above). This sounds great on paper, especially if
you want to support distributed development in branches. But then
you can have conflicts, for example where two upgrades add a
column to the same table. Depending on the upgrade path, you end
up with different results. As your change graph grows, you will
have an exploding number of possible upgrade paths that will need
to be tested.</p>
<p>There needs to be an unambiguous, canonical state of the database
schema for a given source checkout.</p></li>
<li><p>Requiring running through all the upgrade scripts for a fresh
deployment (like
<a href="http://www.depesz.com/2010/08/22/versioning/">this</a>). There are
two problems with this. First, it's probably going to be very
slow. Fresh deployments need to be reasonably fast, because they
will be needed for automated tests, including unit tests, where
you don't want to wait for minutes to set up the basic schema.
Second, it's inefficient. Over time, you might drop columns, add
new columns, delete rows, migrate them to different tables, etc.
If you run through all those upgrade scripts, then a supposedly
fresh database will already contain a bunch of rubble, dropped
attributes, dead rows, and the like.</p>
<p>Therefore, the current version needs to be deployable from a
script that will not end up replaying history.</p></li>
<li><p>Using metalanguages or abstraction layers (like Pyrseas or
Liquibase or any of the metaformats included in various web
frameworks). It'd probably be a good idea to check some of these out
for simple applications. But my concern is whether using an
abstraction layer would prevent me from using certain features.
For example, look at the
<a href="http://pyrseas.wordpress.com/feature-matrix/">Pyrseas feature matrix</a>.
It's pretty impressive. But it doesn't support extensions, roles,
or grants. So (going by that list), I can't use it. (It's being
<a href="http://pyrseas.wordpress.com/2012/04/10/pyrseas-postgresql-features-feedback-requested/">worked on</a>.)
And in a previous version, when I looked at it for a previous
project, it didn't support foreign-data functionality, so I
couldn't use it then either. And those are just the top-level
things the author thought of. Actually, the Pyrseas author has
gone through some
<a href="http://pyrseas.wordpress.com/2012/03/05/more-database-tools/">effort</a>
to have almost complete coverage of PostgreSQL DDL features, so
give this tool a try. But it won't be for everyone.</p></li>
</ul>
<p>So, I'm going to keep looking.</p>

<h2>Time to retrain the fingers (2012-05-14)</h2>
<p>For years, no decades, I've typed <code>tar tzf
something</code>. Except when someone annoying sent an uncompressed tar
file and I had to then go and take out the <code>z</code> in the middle.</p>
<p>Then came <code>bzip2</code>, and we learned <code>tar tjf</code>, <code>tar xjf</code>. OK, I could
live with that. One emerging problem was that the tab completion now
worked the wrong way around conceptually, because you had to pick and
type the right letter first in order to see the appropriate set of
files to unpack offered for completion.</p>
<p>Then came <code>lzma</code>, which was (quick, guess?), <code>tar tJf</code>, <code>tar xJf</code>.
And then there was <code>lzop</code>, which was too boring to get its own letter,
so you had to type out <code>tar -x --lzop -f</code>.</p>
<p>But <code>lzma</code> was short-lived, because then came <code>xz</code>, which was also
<code>J</code>, because <code>lzma</code> was now too boring as well to get its own letter.</p>
<p>Oh, and there is also the old <code>compress</code>, which is <code>Z</code>, and <code>lzip</code>,
which I'd never heard of.</p>
<p>But stop that. Now there is</p>
<pre><code> -a, --auto-compress
use archive suffix to determine the compression program
</code></pre>
<p>This handles all the above compression programs, and no compression. So from now on, I always use <code>tar taf</code> and <code>tar xaf</code>. Awesome.</p>
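<p>The suffix now decides everything, in both directions (assuming your GNU tar knows the compressor in question):</p>
<pre><code>tar caf backup.tar.gz stuff/    # picks gzip from the suffix
tar caf backup.tar.xz stuff/    # picks xz
tar xaf backup.tar.xz
</code></pre>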
<p>The finger movements will be almost the same on QWERTY and AZERTY, and
easier than before on QWERTZ.</p>
<p>Actually, this option is already four years old in GNU tar. Funny I'd
never heard of it until recently.</p>

<h2>Setting the time zone on remote SSH hosts (2012-04-29)</h2>
<p>The task: I have one or more desktop/laptop machines with varying
local time zones (because the persons using them are actually in
different time zones, or because the one person using them travels).
I also have a number of servers configured in some random time zones.
(It could be the time zone where they are physically located, or the
time zone of the person who installed it, or UTC for neutrality.)</p>
<p>Now what I would like to have happen is that if I log in using SSH
from a desktop to a server, I see time on that server in my local time
zone. For things like <code>ls -l</code>, for example. Obviously, this
<q>illusion</q> will never be perfect. Nothing (except something very
complicated) will adjust the time stamps in the syslog output, for
example. But the <code>ls -l</code> case in particular seems to come up a lot,
to check how long ago was this file modified.</p>
<p>This should be completely doable in principle, because you can set the
<code>TZ</code> environment variable to any time zone you like, and it will be
used for things like <code>ls -l</code>. But how do you get the <code>TZ</code> setting
from here to there?</p>
<p>First, you have to make the remote SSH server accept the <code>TZ</code>
environment variable. At least on Debian, this is not done by
default. So make a setting like this in <code>/etc/ssh/sshd_config</code>:</p>
<pre><code># Allow client to pass locale environment variables
AcceptEnv LANG LC_* TZ
</code></pre>
<p>You also need to make the equivalent setting on the client side,
either in <code>/etc/ssh/ssh_config</code> or in <code>~/.ssh/config</code>:</p>
<pre><code>SendEnv LANG LC_* TZ
</code></pre>
<p>Which leaves the question, how do you get your local time zone into
the <code>TZ</code> variable to pass to the remote server? The actual time zone
configuration is the file <code>/etc/localtime</code>, which belongs to glibc.
In current Debian, this is (normally) a <em>copy</em> of some file under
<code>/usr/share/zoneinfo/</code>. In the distant past, it was a symlink, which
would have made things easier, but now it's a copy, so you don't know
where it came from. But the name of the time zone is also written to
<code>/etc/timezone</code>, so you can use that.</p>
<p>The format of the <code>TZ</code> environment variable can be found in the glibc
documentation. If you skip past most of the text, you will see the
relevant parts:</p>
<blockquote>
<p>The third format looks like this:</p>
<p>:CHARACTERS</p>
<p>Each operating system interprets this format differently; in the GNU
C library, CHARACTERS is the name of a file which describes the time
zone.</p>
</blockquote>
<p>So what you could do is set</p>
<pre><code>TZ=":$(cat /etc/timezone)"
</code></pre>
<p>Better yet, for hopping through multiple SSH hosts in particular, make
sure to preserve an already set TZ:</p>
<pre><code>TZ=${TZ:-":$(cat /etc/timezone)"}
</code></pre>
<p>And finally, how does one hook this into <code>ssh</code>? The best I could
think of is a shell alias:</p>
<pre><code>alias ssh='TZ=${TZ:-":$(cat /etc/timezone)"} ssh'
</code></pre>
<p>Now this set up has a number of flaws, including:</p>
<ul>
<li><p>Possibly only works between Linux (Debian?) hosts.</p></li>
<li><p>Only works if available time zone names match.</p></li>
<li><p>Only works when calling <code>ssh</code> from the shell.</p></li>
</ul>
<p>But in practice it has turned out to be quite useful.</p>
<p>Comments? Improvements? Better ideas?</p>
<p>Related thoughts:</p>
<ul>
<li><p>With this system in hand, I have resorted to setting the time zone
on most servers to UTC, since I will see my local time zone
automatically.</p></li>
<li><p>Important for the complete server localization illusion: some ideas
on dealing with
<a href="http://vincent.bernat.im/en/blog/2011-ssh-and-locales.html">locales on remote hosts</a>.</p></li>
</ul>

<h2>PostgreSQL and compiler warnings (2012-03-21)</h2>
<p>Recently, I did some work on backpatching a few commits from PostgreSQL master, and I noticed that with the current tools, the old branches create tons of compiler warnings. In PostgreSQL 8.3, the oldest currently supported branch, a <code>make all</code> with GCC 4.6.3 produces 231 warnings! (Also note that there are only 751 <tt>.c</tt> files, so that's a warning every three files.) We do a lot of work cleaning up any and all compiler warnings, at least those issued by the latest GCC. These kinds of noisy builds are quite troublesome to work with, because it is more difficult to check whether your changes introduced any new, more serious warnings.
<p>
Let's take a look at the current number of compiler warnings in different PostgreSQL branches with different compilers:
<table border="2" cellspacing="0" cellpadding="6" rules="groups" frame="hsides">
<colgroup><col class="right" /><col class="right" /><col class="right" /><col class="right" /><col class="right" /><col class="right" />
</colgroup>
<thead>
<tr><th scope="col" class="right"></th><th scope="col" class="right">gcc 4.4</th><th scope="col" class="right">gcc 4.5</th><th scope="col" class="right">gcc 4.6</th><th scope="col" class="right">gcc 4.7</th><th scope="col" class="right">clang</th></tr>
</thead>
<tbody>
<tr><th class="right">8.3</td><td class="right">173</td><td class="right">51</td><td class="right">231</td><td class="right">207</td><td class="right">665</td></tr>
<tr><th class="right">8.4</td><td class="right">12</td><td class="right">17</td><td class="right">201</td><td class="right">201</td><td class="right">673</td></tr>
<tr><th class="right">9.0</td><td class="right">13</td><td class="right">13</td><td class="right">89</td><td class="right">89</td><td class="right">780</td></tr>
<tr><th class="right">9.1</td><td class="right">24</td><td class="right">24</td><td class="right">40</td><td class="right">40</td><td class="right">25</td></tr>
<tr><th class="right">master</td><td class="right">1</td><td class="right">1</td><td class="right">1</td><td class="right">1</td><td class="right">1</td></tr>
</tbody>
</table>
<p>
Obviously, GCC 4.6 introduced many new warnings. If you use the compiler that was current around the time the branch was originally released, you'll be better off. But even then, you should expect a few surprises. (8.3 would probably require gcc 4.3, but I don't have that available anymore.)
<p>
Fortunately, it looks as though GCC 4.7, which is currently in release candidate state, will spare us new warnings. Also note that clang (version 3.0) is now as good as GCC, as far as noise is concerned.

<h2>PostgreSQL make install times (2012-03-06)</h2>
I have decided that <tt>make install</tt> is too slow for me. Compare: A run of <tt>make install</tt> takes about 10 seconds (details below), but a run of <tt>make all</tt> with the tree mostly up to date and using ccache for the rest usually takes about 1 or 2 seconds. You can end up wasting a lot of time if you need to do many of these build and install cycles during development. In particular, <tt>make check</tt> includes a run of <tt>make install</tt>, so all this time is added to the time it takes for tests to complete.
<p>
So let's optimize this. The times below are all medians from 5 consecutive runs, writing over an existing installation, so they all had to do the same amount of work.
<p>
This is the baseline:
<ul>
<li><tt>make install</tt> — 10.493 s
</ul>
<p>
The first change is to use a faster shell. This system is using bash as <tt>/bin/sh</tt>. Many Linux distributions now use dash instead, but for some reason I haven't changed this system during the upgrade.
<ul>
<li><tt>make install SHELL=/bin/dash</tt> — 6.344 s
</ul>
I guess I'll be switching this system soon as well then!
<p>
The next thing is to avoid installing the translation files. These explode the number of files that need to be installed. Instead of, say, one program file, you end up installing one program file and a dozen or so translation files.
<ul>
<li><tt>make install SHELL=/bin/bash enable_nls=no</tt> — 6.890 s
<li><tt>make install SHELL=/bin/dash enable_nls=no</tt> — 4.482 s
</ul>
(In practice you would use <tt>configure --disable-nls</tt>, which is the default. The above is just a way to do this without reconfiguring.) Now I have in the past preferred to build with NLS support to be able to catch errors in that area, but considering this improvement and the availability of the <tt>make maintainer-check</tt> target, I might end up building without it more often.
<p>
Another tip I remembered from the past was to use the <tt>make -s</tt> option to avoid screen output. Depending on the operating system and whether you are logged in locally or remotely, this can be a big win. On my system, this got lost in the noise a bit, but it appeared to make a small difference over many runs.
<ul>
<li><tt>make install SHELL=/bin/bash -s</tt> — 10.511 s
<li><tt>make install SHELL=/bin/dash -s</tt> — 6.146 s
</ul>
Do add this to your arsenal anyway if you want to get maximum performance.
<p>
Next, let's replace the <tt>install-sh</tt> script that does the actual file copying. For obscure reasons, PostgreSQL always uses that shell script, instead of the <tt>/usr/bin/install</tt> program that an Autoconf-based build system would normally use. But you can override the make variables and substitute the program you want:
<ul>
<li><tt>make install SHELL=/bin/bash INSTALL=install</tt> — 5.418 s
<li><tt>make install SHELL=/bin/dash INSTALL=install</tt> — 3.995 s
</ul>
Interestingly, the choice of shell still makes a noticeable difference, even though it's no longer used to execute <tt>install-sh</tt>.
<p>
Finally, you can also use parallel make for the installation step:
<ul>
<li><tt>make install SHELL=/bin/bash -j2</tt> — 6.538 s
<li><tt>make install SHELL=/bin/dash -j2</tt> — 4.158 s
</ul>
You can gather from these numbers that the installation process appears to be mostly CPU-bound. This system has 4 cores, so let's add some more parallelization:
<ul>
<li><tt>make install SHELL=/bin/dash -j3</tt> — 3.330 s
<li><tt>make install SHELL=/bin/dash -j4</tt> — 2.944 s
<li><tt>make install SHELL=/bin/dash -j5</tt> — 2.930 s
<li><tt>make install SHELL=/bin/dash -j6</tt> — 2.952 s
</ul>
That's probably enough.
<p>
Now let's put everything together:
<ul>
<li><tt>make install SHELL=/bin/dash enable_nls=no INSTALL=install -s -j4</tt> — 1.708 s
</ul>
Or even:
<ul>
<li><tt>make install SHELL=/bin/dash enable_nls=no INSTALL=install -s -j3</tt> — 1.654 s
</ul>
That's a very nice improvement from 10.493 s!
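<p>If you want this without retyping, a small wrapper function does it (hypothetical, and it assumes dash is installed and you can live without NLS):</p>
<pre><code>fast_install() {
    make install SHELL=/bin/dash enable_nls=no INSTALL=install -s -j4 "$@"
}
</code></pre>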
<p>
The problem is, it is not all that easy to pass these options to the <tt>make install</tt> calls made in <tt>make check</tt> runs. If you can and want to change your system shell, and you configure without NLS support, then you will probably already be more than half way there. Then again, I suspect most readers already have that setup anyway. For the other options, to take down the installation time to almost instantaneous, you have to do ad hoc surgery in various places. I'm looking into improving that.

<h2>git whoami (2011-11-22)</h2>
My favorite feature in <tt>bzr</tt> (Bazaar) is the <tt>bzr whoami</tt> command, which prints what your current identity (name and email) is, as far as the repository is concerned. You can tell I haven't used <tt>bzr</tt> much if that's as far as I have gotten. But seriously, with so many Git repositories around, several project identities, <a href="http://michael-prokop.at/blog/2009/05/30/directory-specific-shell-configuration-with-zsh/">directory-specific shell configuration</a>, and so on, it's easy to get confused, and it's annoying to have to check and repair commits for correct user name and email all the time. So here is <a href="https://github.com/petere/git-whoami"><tt>git whoami</tt></a>. This has already saved me so many headaches.
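<p>The essence of it, if you just want a one-liner (roughly what the idea boils down to, minus the script's error handling):</p>
<pre><code>git config --global alias.whoami '!git config user.name; git config user.email'
git whoami
</code></pre>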