Posts Tagged ‘unix’

MAC Address Spoofing on Mac OS X for unlimited free hour passes on xfinitywifi and CableWiFi networks

Friday, July 8th, 2016

From what I gather, xfinity charges people to “rent” wifi routers and then uses that hardware to host pay-per-use public wifi networks. These networks are usually named xfinitywifi or CableWiFi. Every 24 hours each MAC Address is granted a “$0.00 Complimentary Free Pass”:

  1. CLICK I am not an XFINITY customer
  2. CLICK Sign Up
  3. CHOOSE $0.00 for a Complimentary Hour Pass
  4. CLICK Start Session

To “spoof” a new wifi MAC address on Mac OS X, first find out your current address by issuing:

ifconfig en0 | grep ether

This will spit out a number like: 70:51:81:c1:3f:6e. Record this number. To set your MAC address to a random yet valid address use:

sudo ifconfig en0 ether `openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//'`

Then, later, if you want to return to your old address issue:

sudo ifconfig en0 ether 70:51:81:c1:3f:6e

It seems that System Preferences > Network > Advanced > Hardware will reveal your original MAC address in case you forget it.

You can also place these commands as aliases in your ~/.profile:

alias random_mac="ifconfig en0 ether \`openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/.$//'\`"
alias reset_mac="ifconfig en0 ether 70:51:81:c1:3f:6e"
alias sudo='sudo '

Note the trailing space in alias sudo='sudo '; in bash, an alias ending in a space makes the shell check the word that follows for alias expansion too, so sudo random_mac and sudo reset_mac work as expected. This all assumes en0 is your wifi interface. It might be en1 on other Macs.
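
If you’re not sure which interface is your wifi card, something like the following should list the hardware ports along with their device names:

networksetup -listallhardwareports

Look for the “Wi-Fi” (or “AirPort” on older systems) entry and use its Device name (en0, en1, …) in the commands above.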

Reverse case with a ruby one-liner

Thursday, September 12th, 2013

I needed to reverse the case of every character. I used a ruby one-liner:

echo "FooBar" | ruby -ne 'puts $_.split("").map{|e| (e>="a"?e.upcase():e.downcase())}.join'

Returns

fOObAR
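
Incidentally, Ruby strings have a built-in swapcase method, so the same thing can be written a bit more simply as:

echo "FooBar" | ruby -ne 'puts $_.chomp.swapcase'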

Use piped input like it’s from stdin in a C++ program

Tuesday, September 6th, 2011

I was dismayed to find out that, while the file pointer stdin works fine in a compiled C++ program run from a unix/linux shell when you redirect a file to standard input like this:
Option #1:


./my_program <myinputfile

it doesn’t work for me when the input is coming from a pipe like this:
Option #2:


cat myinputfile | ./my_program

Now, there exist ways to read from the pipe correctly. It seems you should first determine which situation is occurring and process each situation differently. But for me this was not an option because I wanted to call a library function whose interface looked like this:


int lib_func(FILE * file_pointer);

For option #1, just calling the following works just fine:


lib_func(stdin);

But for option #2, I get a segmentation fault (presumably because the library function expects a real, seekable file rather than a pipe).

So I whipped up this small function you can save in stdin_to_temp.h:


#include <cstdio>
// Write stdin/piped input to a temporary file which can then be processed as
// if it were a normal file.
// Outputs:
//   temp_file  pointer to temp file pointer, rewound to the beginning of the
//     file so it's ready to be read
// Returns true only if no errors were found
//
// Note: Caller is responsible for closing the file (tmpfile() automatically
// unlinks the file so there is no need to remove/delete/unlink the file)
bool stdin_to_temp(FILE ** temp_file);

// IMPLEMENTATION
#include <iostream>
using namespace std;

bool stdin_to_temp(FILE ** temp_file)
{
  // get a temporary file
  *temp_file = tmpfile();
  if(*temp_file == NULL)
  {
    fprintf(stderr,"IOError: temp file could not be created.\n");
    return false;
  }
  char c;
  // c++'s cin handles the stdin input in a reasonable way
  while (cin.good())
  {
    c = cin.get();
    if(cin.good())
    {
      if(1 != fwrite(&c,sizeof(char),1,*temp_file))
      {
        fprintf(stderr,"IOError: error writing to tempfile.\n");
        return false;
      }
    }
  }
  // rewind file getting it ready to read from
  rewind(*temp_file);
  return true;
}

The idea is to take advantage of the fact that in C++ the istream cin “correctly” treats stdin and piped input the same. So if I read from cin and write it back to a temporary file, I get a file pointer that behaves the way I wanted stdin to behave.

You can demo how it works with this small program. Save it in stdin_to_temp_demo.cpp


#include "stdin_to_temp.h"

/**
 * Compile with:
 *   g++ -O3 -o stdin_to_temp_demo stdin_to_temp_demo.cpp
 *
 * Run examples:
 *   cat file1 | ./stdin_to_temp_demo
 *   cat file1 | ./stdin_to_temp_demo | cat >file2
 *   cat file1 | ./stdin_to_temp_demo dummy1 dummy2 | cat >file2
 *   ./stdin_to_temp_demo <file1 | cat >file2
 *   ./stdin_to_temp_demo <file1 >file2
 *
 */

int main(int argc,char * argv[])
{
  // Process arguments and print to stderr
  for(int i = 1;i<argc;i++)
  {
    fprintf(stderr,"argv[%d] = %s\n",i,argv[i]);
  }

  FILE * temp_file;
  bool success = stdin_to_temp(&temp_file);
  if(!success)
  {
    fprintf(stderr,"Fatal Error: could not convert stdin to temp file\n");
    // only try to close the temp file if it was actually created
    if(temp_file != NULL)
    {
      fclose(temp_file);
    }
    exit(1);
  }

  // Do something interesting with the temporary file. 
  // Read the file and write it to stdout
  char c;
  // Read file one character at a time and write to stdout
  while(fread(&c,sizeof(char),1,temp_file)==1)
  {
    fwrite(&c,sizeof(char),1,stdout);
  }
  // close file
  fclose(temp_file);
}

Surely this doesn’t take full advantage of pipes. I think it’s actually defeating their intended purpose, since everything gets buffered to a temporary file on disk first. But it lets my simple C++ program keep the familiar interface I wanted.

Determine more recent of two files in bash

Friday, February 18th, 2011

Here’s a simple bash script that determines the more recent of two files:


#!/bin/bash
# Usage:
#    morerecent file1 file2
if [[ `stat -f %c "$1"` -gt `stat -f %c "$2"` ]];
then
  echo "$1"
else
  echo "$2"
fi

Note: It seems stat is infamously implementation dependent, so the format/parameters may be different on your machine. (Also, %c here is the inode change time; if you specifically want the last modification time, %m is the one to use with BSD/OS X stat.)
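
If modification time is what you care about, here’s a sketch of a more portable version that uses bash’s built-in -nt (“newer than”) test and skips stat entirely:

#!/bin/bash
# Usage:
#    morerecent file1 file2
# -nt is true if the first file has a more recent modification time
if [[ "$1" -nt "$2" ]];
then
  echo "$1"
else
  echo "$2"
fi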

Vi(m) tip #9: Copy, Cut and Paste into Mac OS X clipboard

Wednesday, June 30th, 2010

I can get a lot done in vim without ever having to use the mouse or really exit “vim world”. But the one thing I keep reaching for the mouse for is copying and pasting to and from other applications. Here’s a way to do these operations without resorting to the mouse or foreign keystrokes. Issue these in command mode:

Cut line under cursor


:.!pbcopy

Copy line under cursor


:.!pbcopy|pbpaste

Paste as line beneath cursor


:r !pbpaste

You can also use this if you’ve made a selection in Visual Mode:

Cut current selection

(warning: this grabs the whole lines of any lines within selection)


:'<,'>!pbcopy

Copy current selection

(warning: this grabs the whole lines of any lines within selection)


:'<,'>!pbcopy|pbpaste

Note: It seems that for that one you should select just the first letter of each line, not the entire block; otherwise it just cuts…

Update: It’s much simpler:


"*yy

Yanks the current line to the clipboard. Similarly,


"*dd

cuts the current line and


"*p

pastes the clipboard below the current line.
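
Another option, assuming your vim was built with clipboard support (vim --version should list +clipboard), is to point vim’s unnamed register at the system clipboard so that plain yy, dd and p go through the clipboard automatically. For example:

echo 'set clipboard=unnamed' >> ~/.vimrc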

Download all files of certain extension from website using wget

Monday, May 17th, 2010

Issue this command in a terminal to download all mp3s linked to on a page using wget:

wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off [url of website]

Or, if you want to download all the linked mp3s from multiple pages, make a text file containing each url on a separate line and then issue:

wget -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off -i ~/mp3blogs.txt

If the site is behind basic http authentication you can use something like:

wget --http-user [username] --http-passwd [passwd] -r -l1 -H -t1 -nd -N -np -A.mp3 -erobots=off "[url]"
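
For reference, here’s roughly what each of those flags is doing:

# -r            recurse into links found on the page
# -l1           but only one level deep
# -H            span hosts (follow links to other domains hosting the mp3s)
# -t1           only try each download once
# -nd           don't recreate the site's directory structure locally
# -N            use timestamping so already-downloaded files are skipped
# -np           never ascend to the parent directory
# -A.mp3        accept only files ending in .mp3
# -erobots=off  ignore robots.txt
# -i file       read the list of urls from a file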

Cron job to warn you when your quota is almost full

Tuesday, May 11th, 2010

When I go over my disk space quota on the CIMS server I’m locked out of my account. I can only issue the ls and cd commands. That’s right, I can’t even issue rm. Which means every time I go over the quota (I realize it shouldn’t happen often, but alas it has) I need to call the help desk and have them enlarge my quota temporarily so I can delete some files.

This happened recently over a weekend when we were working on a paper for the SGP deadline, and nobody was at the CIMS help desk to answer my desperate call. So from now on I have a little script that will email me when I’m close to filling my quota. Here’s the script, which I save in a file called email-quota-warning.sh:


#!/bin/bash

# grep out percentage of quota used on "home" mount
PERCENT=`quota | grep home | grep -o "[0-9]\+\%" | grep -o "[0-9]\+"`;

# if the percentage is above some threshold then send an email
if [ "$PERCENT" -gt "95" ]; then
  MESSAGE="Your quota on home is getting close to full: $PERCENT%";
  EMAIL="your-email-goes-here";
  SUBJECT="Quota Warning: $PERCENT% full";
  echo $MESSAGE | /bin/mail -s "$SUBJECT" "$EMAIL"
  echo 'echo $MESSAGE | /bin/mail -s "$SUBJECT" "$EMAIL" '
fi;

Then I set up a cron job on a CIMS machine. Because the access.cims.nyu.edu machine has a bogus version of mail (I couldn’t figure out how to make it do subjects), I set up my job on mauler.

Of course then I had trouble editing my crontab. When I issue:


crontab -e

I get the following error:


E486: Pattern not found: 's$
crontab: no changes made to crontab

I couldn’t figure out how to fix this problem so instead I just created a file called .crons with the following in it:


0 * * * * bash ~/bin/email-quota-warning.sh

Of course, change the path after bash to wherever you’re storing the script. Then just issue:


crontab .crons
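
You can double check that the job actually got installed with:

crontab -l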

Find big directories using du and (e)grep

Thursday, March 25th, 2010

Here’s a simple script I’m using to locate big directories (larger than 1GB):


du -h | grep "^ *[0-9][0-9.]*G"

The output looks like this:


1.1G	./Art/2007
2.5G	./Art/2008/Party Freakz Pasta Party
6.3G	./Art/2008
9.2G	./Art
 24G	./Documents/Parallels/Windows 7.pvm/Windows 7-0.hdd
 24G	./Documents/Parallels/Windows 7.pvm
 24G	./Documents/Parallels
 26G	./Documents
2.5G	./Downloads/enstrophy-big-endian
 33G	./Downloads
...
100G	.

Here’s how I find big directories greater than 100MB:


du -h | egrep "^ *([0-9][0-9][0-9][0-9.]*M|[0-9][0-9.]*G)"

To list files, too, just add the -a flag to du like this:


du -ah | grep "^ *[0-9][0-9.]*G"
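
If the grep patterns get fiddly, another rough approach is to have du report sizes in kilobytes and sort numerically, so the biggest directories end up at the bottom:

du -k | sort -n | tail -n 20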

Turn off rm, mv interactive prompting when ssh-ed into access.cims.nyu.edu

Wednesday, December 16th, 2009

When I ssh into access.cims.nyu.edu and issue a rm or mv command I am bombarded with prompts for every file. For example if I issue:

rm *.pdf

I have to type yes <ENTER> for every pdf in the current directory.

I have tried the -f option listed in the rm man page, but I’m still prompted. I wondered if there was a way to turn this prompting feature off. It would be very convenient if rm-ing and mv-ing acted the same way on access.cims.nyu.edu as on the other unix and linux machines I have used, use, and will use.

I emailed the Courant help desk and got a solution:

That’s because in the system-wide .bashrc, the mv, cp, and rm commands are aliased to “mv -i”, “cp -i”, and “rm -i”. To unalias these commands in your environment, you just need to add the following lines to the end of your ~/.bashrc.


unalias rm
unalias mv

I did just that and now everything works fine.
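
As an aside, if you ever just want to skip the prompt for a single command without touching your .bashrc, prefixing the command with a backslash (or with command) bypasses the alias:

\rm *.pdf
command rm *.pdf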

Test ssh connection speed

Thursday, November 19th, 2009

With a few different ssh hosts these days, it’s become handy to be able to test my upload and download transfer rates to each of them. Here’s my SSH speed test bash script. It uses dd to generate a test file and then uses scp to test the transfer rates across the network.

#!/bin/bash
# scp-speed-test.sh
# Author: Alec Jacobson alecjacobsonATgmailDOTcom
#
# Test ssh connection speed by uploading and then downloading a 10000kB test
# file (optionally user-specified size)
#
# Usage:
#   ./scp-speed-test.sh user@hostname [test file size in kBs]
#

ssh_server=$1
test_file=".scp-test-file"

# Optional: user specified test file size in kBs
if test -z "$2"
then
  # default size is 10000 kB ~ 10 MB
  test_size="10000"
else
  test_size=$2
fi


# generate a $test_size kB file of all zeros
echo "Generating $test_size kB test file..."
dd if=/dev/zero of=$test_file bs=$(echo "$test_size*1024" | bc) \
  count=1 &> /dev/null

# upload test
echo "Testing upload to $ssh_server..."
up_speed=`scp -v $test_file $ssh_server:$test_file 2>&1 | \
  grep "Bytes per second" | \
  sed "s/^[^0-9]*\([0-9.]*\)[^0-9]*\([0-9.]*\).*$/\1/g"`
up_speed=`echo "($up_speed*0.0009765625*100.0+0.5)/1*0.01" | bc`

# download test
echo "Testing download from $ssh_server..."
down_speed=`scp -v $ssh_server:$test_file $test_file 2>&1 | \
  grep "Bytes per second" | \
  sed "s/^[^0-9]*\([0-9.]*\)[^0-9]*\([0-9.]*\).*$/\2/g"`
down_speed=`echo "($down_speed*0.0009765625*100.0+0.5)/1*0.01" | bc`

# clean up
echo "Removing test file on $ssh_server..."
ssh $ssh_server "rm $test_file"
echo "Removing test file locally..."
rm $test_file

# print result
echo ""
echo "Upload speed:   $up_speed kB/s"
echo "Download speed: $down_speed kB/s"

Save it in a file called scp-speed-test.sh, then run it with the command:

bash scp-speed-test.sh user@hostname

Note: I’m still looking for a good way to test general internet upload and download speed via the command line…
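
As for that note: one rough option for checking raw download speed from the command line is curl’s --write-out variables, pointed at some reasonably large file you have a url for (the url here is just a placeholder):

curl -s -o /dev/null -w '%{speed_download} bytes/sec\n' [url of large file]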