Posts Tagged ‘linux’

Compiling C++ with C++11 support on bluehost servers

Monday, September 8th, 2014

I asked Bluehost to install a modern version of gcc with C++11 support. They told me that only gcc 4.3 was deemed stable.

So then I tried to compile gcc 4.7 and gcc 4.9 from scratch. This seemed promising, but eventually I hit an apparent bug in gcc's make routine, giving me the error:

"cp: cannot stat `': No such file or directory"

Manually copying the file for make did not help.

So then I switched to trying to compile clang. This worked. I followed the instructions in this answer.

Namely, I added this to my .bashrc

export LD_LIBRARY_PATH=/usr/lib/gcc/x86_64-redhat-linux/4.4.7/:$LD_LIBRARY_PATH
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

Then issued the following:

source .bashrc
tar xzf llvm-3.3.src.tar.gz && cd llvm-3.3.src/tools/ && tar xzf ../../cfe-3.3.src.tar.gz
cd llvm-3.3.src
mv tools/cfe-3.3.src tools/clang
./configure --prefix=$HOME/llvm-3.3.src/llvm
make -j8
make install

Now to test it out, I created a file test.cpp with this inside:

#include <iostream>
int main(int argc, char *argv[])
{
  const auto & hello = []()
  {
    std::cout<<"Hello, world."<<std::endl;
  };
  hello();
}

Then I could try to compile with:

llvm-3.3.src/llvm/bin/clang++ -std=c++11 test.cpp -o test

But I get the error:

In file included from test.cpp:1:
In file included from /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/iostream:40:
In file included from /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/ostream:40:
In file included from /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/ios:40:
In file included from /usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/exception:148:
/usr/lib/gcc/x86_64-redhat-linux/4.4.7/../../../../include/c++/4.4.7/exception_ptr.h:143:13: error: unknown type name
      const type_info*
1 error generated.

This seems to be a known bug in the standard library headers of old “stable” versions of gcc. The fix is to add a forward declaration of type_info before the #include <iostream>:

#ifdef __clang__
class type_info;
#endif
#include <iostream>
int main(int argc, char *argv[])
{
  const auto & hello = []()
  {
    std::cout<<"Hello, world."<<std::endl;
  };
  hello();
}

This compiles and runs.

Perl warnings logging into Bluehost server from Switzerland

Wednesday, October 17th, 2012

I got the following warnings logging into my Bluehost web server from Zurich:

perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.utf-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

To disable them, I commented out the following line in the /private/etc/ssh_config file on my client-side Mac so it looks like:

#SendEnv LANG LC_*

Use piped input like it’s from stdin in a C++ program

Tuesday, September 6th, 2011

I was dismayed to find out that while the file pointer stdin works fine in a compiled C++ program on a unix/linux shell if you redirect a file to standard input like this:
Option #1:

./my_program <myinputfile

it doesn't work (for my purposes, below) if the input comes from a pipe like this:
Option #2:

cat myfile | ./my_program

Now, there exist ways to read from the pipe correctly. It seems you should first determine which situation is occurring and process each situation differently. But for me this was not an option because I wanted to call a library function whose interface looked like this:

int lib_func(FILE * file_pointer);

For option #1, just calling the following works just fine:

lib_func(stdin);

But for option #2, I get a segmentation fault.
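The difference is visible from the shell itself: with a redirect, the program's standard input is a seekable regular file, while with a pipe it's a FIFO, which is presumably what the library call chokes on. A quick sketch on linux using /dev/stdin:

```shell
# With a redirect, fd 0 is a regular file; with a pipe, it's a FIFO.
printf 'hello\n' > /tmp/stdin_demo.txt
( [ -f /dev/stdin ] && echo "redirect: regular file" ) < /tmp/stdin_demo.txt
printf 'hello\n' | ( [ -p /dev/stdin ] && echo "pipe: FIFO" )
rm /tmp/stdin_demo.txt
```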

So I whipped up this small function you can save in stdin_to_temp.h:

#include <cstdio>
// Write stdin/piped input to a temporary file which can then be processed as
// if it were a normal file.
// Outputs:
//   temp_file  pointer to temp file pointer, rewound to beginning of file so
//     it's ready to be read
// Return true only if no errors were found
// Note: Caller is responsible for closing the file (tmpfile() automatically
// unlinks the file so there is no need to remove/delete/unlink the file)
bool stdin_to_temp(FILE ** temp_file);

#include <iostream>
using namespace std;

bool stdin_to_temp(FILE ** temp_file)
{
  // get a temporary file
  *temp_file = tmpfile();
  if(*temp_file == NULL)
  {
    fprintf(stderr,"IOError: temp file could not be created.\n");
    return false;
  }
  char c;
  // c++'s cin handles the stdin input in a reasonable way
  while(cin.good())
  {
    c = cin.get();
    // don't write the EOF marker cin.get() returns at the end of input
    if(cin.good())
    {
      if(1 != fwrite(&c,sizeof(char),1,*temp_file))
      {
        fprintf(stderr,"IOError: error writing to tempfile.\n");
        return false;
      }
    }
  }
  // rewind file getting it ready to read from
  rewind(*temp_file);
  return true;
}

The idea is to take advantage of the fact that in C++ the istream cin “correctly” treats stdin and piped input as the same. So if I read from cin and write it back to some temporary file then I get a file pointer that acts like how I wanted the stdin to act.
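The same spool-to-a-temp-file trick can be sketched in one line of shell, with mktemp standing in for tmpfile() and wc -l standing in for the hypothetical lib_func:

```shell
# Spool piped input to a temp file, then hand the now-seekable file to the
# consumer; wc -l reads it like any regular file and prints the line count.
printf '1\n2\n3\n' | { t=$(mktemp); cat > "$t"; wc -l < "$t"; rm "$t"; }
```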

You can demo how it works with this small program. Save it in stdin_to_temp_demo.cpp

#include "stdin_to_temp.h"

 * Compile with:
 *   g++ -O3 -o stdin_to_temp_demo stdin_to_temp_demo.cpp
 * Run examples:
 *   cat file1 | ./stdin_to_temp_demo
 *   cat file1 | ./stdin_to_temp_demo | cat >file2
 *   cat file1 | ./stdin_to_temp_demo dummy1 dummy2 | cat >file2
 *   ./stdin_to_temp_demo <file1 | cat >file2
 *   ./stdin_to_temp_demo <file1 >file2

int main(int argc,char * argv[])
  // Process arguements and print to stderr
  for(int i = 1;i<argc;i++)
    fprintf(stderr,"argv[%d] = %s\n",i,argv[i]);

  FILE * temp_file;
  bool success = stdin_to_temp(&temp_file);
    fprintf(stderr,"Fatal Error: could not convert stdin to temp file\n");
    // try to close temp file

  // Do something interesting with the temporary file. 
  // Read the file and write it to stdout
  char c;
  // Read file one character at a time and write to stdout
  // close file

Surely this doesn’t take full advantage of pipes; I think it’s actually defeating their intended purpose. But it allows me to have the familiar interface I wanted for my simple C++ program.

Find big directories using du and (e)grep

Thursday, March 25th, 2010

Here’s a simple script I’m using to locate big directories (larger than 1GB):

du -h | grep "^ *[0-9][0-9.]*G"

The output looks like this:

1.1G	./Art/2007
2.5G	./Art/2008/Party Freakz Pasta Party
6.3G	./Art/2008
9.2G	./Art
 24G	./Documents/Parallels/Windows 7.pvm/Windows 7-0.hdd
 24G	./Documents/Parallels/Windows 7.pvm
 24G	./Documents/Parallels
 26G	./Documents
2.5G	./Downloads/enstrophy-big-endian
 33G	./Downloads
100G	.

Here’s how I find big directories larger than 100MB:

du -h | egrep "^ *([0-9][0-9][0-9][0-9.]*M)|([0-9][0-9.]*G)"

To list files, too, just add the -a flag to du like this:

du -ah | grep "^ *[0-9][0-9.]*G"
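If the machine has a reasonably recent GNU coreutils, du can do the size filtering itself via --threshold, avoiding the grep regex entirely (a sketch; the regex approach above is more portable):

```shell
# Make a ~2MB demo directory, then ask du to list only entries of at
# least 1MB; -t is the short form of --threshold.
mkdir -p /tmp/du_demo
dd if=/dev/zero of=/tmp/du_demo/big bs=1024 count=2048 2>/dev/null
du -h --threshold=1M /tmp/du_demo
rm -r /tmp/du_demo
```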

“Recipe organizer and sharing interface”

Monday, January 4th, 2010

'Recipe organizer and sharing interface' beta screenshot

“Recipe organizer and sharing interface” is a recipe organizer and sharing web interface. It’s the Ruby on Rails app I wrote for my final class project in my senior year of college.

Use the beta version of “Recipe organizer and sharing interface” I’m hosting on a CIMS linux server machine.

Turn off rm, mv interactive prompting when ssh-ed into

Wednesday, December 16th, 2009

When I ssh in and issue an rm or mv command, I am bombarded with prompts for every file. For example, if I issue:

rm *.pdf

I have to type yes <ENTER> for every pdf in the current directory.

I have tried the -f option listed in the rm man page, but I’m still prompted. I wondered if there was a way to turn this prompting feature off. It would be very convenient if rm-ing and mv-ing acted the same way as on the other unix and linux machines I used, use, and will use.

I emailed the Courant help desk and got a solution:

That’s because in the system-wide .bashrc, the mv, cp, and rm commands are aliased to “mv -i”, “cp -i”, and “rm -i”. To unalias these commands in your environment, you just need to add the following lines to the end of your ~/.bashrc.

unalias rm
unalias mv

I did just that and now everything works fine.
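Before editing anything, you can check what's going on: the type builtin reports whether a command name is currently shadowed by an alias. A sketch, simulating the system-wide alias in bash:

```shell
# Simulate the system-wide alias, inspect it, then remove it.
alias rm='rm -i'
type rm      # reports that rm is aliased to 'rm -i'
unalias rm
type rm      # now reports the real binary, e.g. /bin/rm
```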

Retrieve current user’s full name, Mac OS X

Saturday, December 5th, 2009

osascript is a command that allows you to execute AppleScript via the command line and in scripts. Here’s a short command that retrieves the current user’s full name.

osascript -e "long user name of (system info)"

Anyone know how to do this in pure bash/unix tools?

Update: Gmail Notifier corrupted my osascript so now I have to send the bogus errors to /dev/null like this:

osascript -e "long user name of (system info)" 2>/dev/null

Here’s another way I found the long user name:

system_profiler  | grep "User Name:" | sed "s/^      User Name: \([^(]*\) (.*/\1/g"

Update: On linux consider using this:

getent passwd $USER | cut -d ":" -f 5 

Whereami, find out your physical location via command line

Saturday, December 5th, 2009

Using a wget and sed combo I found on go2linux and a little web scraping, I’ve come up with a little command-line bash script to find your public IP address and then determine your physical location. Save this in a file:

#!/bin/bash

# get public ip address from
public_ip=`wget -q -O - | \
sed -e 's/.*Current IP Address: //' -e 's/<.*$//'`

echo $public_ip

# get physical address from ip address from
wget -q -O - \$public_ip | \
grep "\(\(City\)\|\(State or Region\)\|\(Country\)\)<\/td>" | \
sed "s/.*<b>\([^<]*\)<\/b>.*/\1/"

Note: If you know of a more stable IP-to-physical-location site to scrape, leave it below.

Synergy server and client settings and commands

Saturday, November 21st, 2009

After the recommendation of a linux friend, I have begun using synergy to share my mouse and keyboard between the new macbook pro on my desk and the old powerbook two shelves up (I do have to physically switch the DVI cable to see the right computer on my monitor though … DVI switches still cost too much).

On the server (the computer which actually owns the mouse and keyboard), named Enfermera:

Here is my synergy.conf file saved in ~/.synergy.conf

section: screens
    Enfermera.local:
    AJX.local:
end
section: links
    Enfermera.local:
        right = AJX.local
    AJX.local:
        left = Enfermera.local
end

Then on this computer (server) I run the command

synergys -f --config ~/.synergy.conf 

On the client (the computer not physically or emotionally connected to the mouse and keyboard), named AJX, I run this command to start listening for the mouse and keyboard coming from Enfermera:

synergyc -f Enfermera.local

Note: Another awesome feature of synergy is that your linked computers will share clipboards allowing you to copy from one computer and paste into the other.

Note: I was able to install this on Mac OS X 10.4 and Mac OS X 10.5 (both using sudo port install synergy), the two operating systems had no problems talking to each other through synergy.

Test ssh connection speed

Thursday, November 19th, 2009

Having a few different ssh hosts these days it’s become convenient to be able to test my upload and download transfer rates. Here’s my SSH speed test bash script. It uses dd to generate a test file then uses scp to test the transfer rates across the network.

#!/bin/bash
# Author: Alec Jacobson alecjacobsonATgmailDOTcom
# Test ssh connection speed by uploading and then downloading a 10000kB test
# file (optionally user-specified size)
# Usage:
#   ./ user@hostname [test file size in kBs]

ssh_server="$1"
# name of the temporary test file
test_file="ssh_speed_test_file"

# Optional: user specified test file size in kBs
if test -z "$2"
then
  # default size is 10000kB ~ 10MB
  test_size="10000"
else
  test_size="$2"
fi

# generate a test file of all zeros
echo "Generating $test_size kB test file..."
dd if=/dev/zero of=$test_file bs=$(echo "$test_size*1024" | bc) \
  count=1 &> /dev/null

# upload test
echo "Testing upload to $ssh_server..."
up_speed=`scp -v $test_file $ssh_server:$test_file 2>&1 | \
  grep "Bytes per second" | \
  sed "s/^[^0-9]*\([0-9.]*\)[^0-9]*\([0-9.]*\).*$/\1/g"`
up_speed=`echo "($up_speed*0.0009765625*100.0+0.5)/1*0.01" | bc`

# download test
echo "Testing download from $ssh_server..."
down_speed=`scp -v $ssh_server:$test_file $test_file 2>&1 | \
  grep "Bytes per second" | \
  sed "s/^[^0-9]*\([0-9.]*\)[^0-9]*\([0-9.]*\).*$/\2/g"`
down_speed=`echo "($down_speed*0.0009765625*100.0+0.5)/1*0.01" | bc`

# clean up
echo "Removing test file on $ssh_server..."
ssh $ssh_server "rm $test_file"
echo "Removing test file locally..."
rm $test_file

# print result
echo ""
echo "Upload speed:   $up_speed kB/s"
echo "Download speed: $down_speed kB/s"

Save it in a file, then run it with the command:

bash user@hostname

Note: I’m still looking for a good way to test general internet upload and download speed via the command line…