About Me

A Computer Enthusiast who's learning as much as I can while attempting to share and redistribute the knowledge! Enjoy! :D

Sunday, December 26, 2010

Eclipse + ShellEd plugin = Shell Scripting within Eclipse

Quick How-To for Eclipse + ShellEd plugin:

Tutorial on installing Eclipse Plugin ShellEd using the following:
  • MAC OS X (Version: 10.6.5)
  • Eclipse (Version: Helios Service Release 1)
1. Install Linux Tools Plugin
  • Within Eclipse: click Help --> Install New Software
  • Click "Available Software Sites" (link slightly below Add button)
  • Enable (check box): http://download.eclipse.org/technology/linuxtools/update
  • Click OK
  • Under "Work with:" --> Select the recently enabled linuxtools
  • Under "Linux Tools" (1st group) --> Enable: Man Page Viewer (Incubation)
  • Click Next --> Next --> Accept license agreement --> Finish
  • Restart Eclipse
2. Download and Install the latest ShellEd Plugin Build
  • Download the latest ShellEd Eclipse Plugin (.zip file) from: http://sourceforge.net/projects/shelled/files/shelled/
  • Within Eclipse: Click on Help --> Install New Software
  • Click Add --> Archive ...
  • Navigate to the downloaded location of the .zip file --> Click Open
  • Input a name for future reference --> Add Repository (Click Ok)
  • Check Box --> Next --> Next --> Accept license agreement --> Finish
  • Confirm Security Warning (Installing Unsigned Content) by Clicking OK
  • Restart Eclipse
3. Test the installation by creating a new Shell Project, setting up the Shell Interpreter, entering some code, and running the script within Eclipse.

Creating A New Shell Project
  • Create a new project: New Project Wizard --> Select Shell Script Project Wizard --> Click Next
  • Input the Project Name --> Click Finish (default settings are fine for now)
Setting Up The Shell Interpreter
  • Navigate to Eclipse Preferences (Eclipse Menu --> Preferences)
  • Navigate to ShellEd Options --> Select Interpreters
  • Click Search --> Select the interpreter you would like to use --> Click OK
Entering Code
  • Insert a new file into the project --> Right click on Shell Project Name --> Select New --> Select File
  • Input file name --> Click Finish
  • Enter a few commands (Bash Interpreter for this example) and Save
Run The Script Within Eclipse
  • Click Run Menu --> Run As --> Run shell script
--
Resources

1. SourceForge: Information on setting up Interpreter within Eclipse
  • http://sourceforge.net/projects/shelled/forums/forum/399718/topic/3863713
2. Linux Tools Project Plugin Install Help
  • http://wiki.eclipse.org/Linux_Tools_Project/PluginInstallHelp
3. SourceForge ShellEd Source and Zip Files
  • http://sourceforge.net/projects/shelled/files/shelled/
4. Eclipse Marketplace ShellEd
  • http://marketplace.eclipse.org/content/shelled

Monday, July 12, 2010

Boost Libraries and Eclipse

Installing Boost Libraries

1. Download the most recent version of Boost Libraries - http://sourceforge.net/projects/boost/files/boost/

2. Within the directory where you want to put the Boost Installation, execute the following:
  • tar --bzip2 -xf /path/to/boost_1_45_0.tar.bz2
Lagatuz-MacBookPro:Downloads marklagatuz$ pwd
/Users/marklagatuz/Downloads
Lagatuz-MacBookPro:Downloads marklagatuz$ tar --bzip2 -xf boost_1_45_0.tar.bz2

You now have a working Boost Library consisting of Header files. If you would like to make use of the compiled libraries ... continue on!

3. Set up the Boost Compiled Library
  • Run the bootstrap script with the --help option to see the available options
./bootstrap.sh --help
  • Create the directory where you would like the Boost Library Binaries to reside
Lagatuz-MacBookPro:Downloads marklagatuz$ pwd
/Users/marklagatuz/Downloads
Lagatuz-MacBookPro:Downloads marklagatuz$ mkdir boost_1_45_0_Compiled
  • Change directory into the boost installation directory (header files only)
Lagatuz-MacBookPro:Downloads marklagatuz$ cd boost_1_45_0
Lagatuz-MacBookPro:boost_1_45_0 marklagatuz$ pwd
/Users/marklagatuz/Downloads/boost_1_45_0
  • Run the bootstrap script. I've added the following options:
  • --prefix: the installation directory of my choosing
  • --with-libraries: which library binaries I want installed

Lagatuz-MacBookPro:boost_1_45_0 marklagatuz$ ./bootstrap.sh --prefix=/Users/marklagatuz/Downloads/boost_1_45_0_Compiled --with-libraries=all
  • Finally, install the binaries (a /lib will be created inside the prefix directory)
Lagatuz-MacBookPro:boost_1_45_0 marklagatuz$ ./bjam install

Congratulations ... you have the Boost Libraries installed!

Integrating Boost with Eclipse

After a successful installation of Boost Libraries on your system, here are some steps to integrate the Library with Eclipse. I've installed on OS X, so a Linux flavor should be similar.

If you want to utilize only the parts of the Library that are inline functions and templates (non-compiled):

1. Within Eclipse: Go to Properties menu
  • Project --> Properties
2. Add the headers into your Include path during compilation
  • C/C++ Build --> Settings --> GCC C++ Compiler --> Directories
  • Add the location of your installed Boost Library Include Path (/Users/marklagatuz/Downloads/boost_1_43_0 for me)
3. Finally save the configuration
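
As a quick sanity check that the include path is picked up, here is a minimal header-only example (the file name and the choice of shared_ptr/lexical_cast are just mine; any header-only Boost library will do):

// headerOnlyTest.cpp - hypothetical test file; needs only the Boost include path
#include <iostream>
#include <string>
#include <boost/shared_ptr.hpp>
#include <boost/lexical_cast.hpp>

int main() {
    // shared_ptr and lexical_cast are templates/inline code, so no Boost
    // binaries are linked - the -I path to the Boost root is all that's needed
    boost::shared_ptr<int> answer(new int(42));
    std::string text = boost::lexical_cast<std::string>(*answer);
    std::cout << "The answer is " << text << std::endl;
    return 0;
}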

If you want to utilize the parts of the Library that are compiled:

1. Within Eclipse: Go to Properties menu
  • Project --> Properties
2. Add the headers into your Include path during compilation
  • C/C++ Build --> Settings --> GCC C++ Compiler --> Directories
  • Add the location of compiled Boost Library Include Path (/Users/marklagatuz/Downloads/boost_compiled/include for me)
3. Add the compiled Library to the Library search path during linking
  • C/C++ Build --> Settings --> GCC C++ Linker --> Libraries --> Library search path (-L)
  • Add the location of the compiled Boost Library (/Users/marklagatuz/Downloads/boost_compiled/lib for me)
4. Add the specific Library name to the Libraries
  • C/C++ Build --> Settings --> GCC C++ Linker --> Libraries --> Libraries (-l)
  • Add compiled Library name you would like to utilize
5. Finally save the configuration
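
And here is a minimal example that exercises a compiled library (Boost.Regex is my arbitrary pick; the exact name to add under Libraries (-l) depends on how bjam tagged your build, e.g. boost_regex):

// regexTest.cpp - hypothetical test file; needs the include path, the -L path, and -lboost_regex
#include <iostream>
#include <string>
#include <boost/regex.hpp>

int main() {
    // Boost.Regex is one of the separately compiled libraries, so the linker
    // must find libboost_regex in the Library search path (-L) set above
    boost::regex pattern("\\d+");
    std::string input = "Boost 1.45.0 on OS X 10.6";
    std::cout << (boost::regex_search(input, pattern) ? "digits found" : "no digits") << std::endl;
    return 0;
}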

You are ready to utilize the Boost Library during your development!

--
Resources

1. Boost Getting Started Guide
  • http://www.boost.org/doc/libs/1_45_0/more/getting_started/index.html

Monday, June 21, 2010

CUDA: Know your limits on global memory

I was coding away on an assignment when I ran into a conundrum: I was getting weird results when attempting to copy data onto the device. There would be instances when arrays copied onto the device would be accessible during one run, yet inaccessible during another.

The fact that I'm coding CUDA kernels on OS X presents a dilemma: cuda-gdb is not available (yet) on OS X. I have to rely on old-school debugging techniques ... a code walkthrough and print statements! After numerous tests and frustrations ... I figured out I was running into a problem with global memory:

marklagatuz$ /Developer/GPU\ Computing/C/bin/darwin/release/deviceQuery

Device 0: "GeForce 9400M"
Total amount of global memory: 265945088 bytes

The above reads approximately 265MB of global memory. I had 4 arrays of 67MB each being copied onto the device. I was clearly running into memory issues, which would explain why a different array caused problems on each run.
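
Here is a minimal sketch of the check I should have done up front: query the device's free/total global memory and test every cudaMalloc return code (the 67MB size just mirrors my arrays; everything else is made up):

// memCheck.cu - hypothetical sketch, compile with nvcc
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);        // runtime API query: how much global memory is available?
    printf("Global memory: %zu free / %zu total bytes\n", freeBytes, totalBytes);

    const size_t arrayBytes = 67UL * 1024 * 1024;   // ~67MB, like each of my 4 arrays
    float* devArrays[4] = {0, 0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        cudaError_t err = cudaMalloc((void**)&devArrays[i], arrayBytes);
        if (err != cudaSuccess) {                   // this is where the "weird results" were coming from
            printf("cudaMalloc of array %d failed: %s\n", i, cudaGetErrorString(err));
            break;
        }
    }
    for (int i = 0; i < 4; ++i)
        if (devArrays[i]) cudaFree(devArrays[i]);
    return 0;
}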

Lesson learned: Check your device(s) limitations before coding away! Then again ... you should be doing that anyways!

Tuesday, June 15, 2010

Learned Something New (or actually a review of something old)!

Since I'm forcing myself to think in terms of OO (Object Orientation), I forgot that computers are still 1's and 0's! As I'm reading code to understand design patterns, algorithms, and methods other folks are using, I came across something I've never used before (at least in my own code): shift operators.

  • <<
  • >>
I've always thought of the chevrons as output redirection in scripting or as stream operators in C++. I'd forgotten they actually shift the bits either to the left or right:

(1 << 24) == 0000 0001 0000 0000 0000 0000 0000 0000 (a 1 followed by twenty-four 0's)
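
A tiny example to make the distinction concrete (the values are arbitrary):

#include <iostream>

int main() {
    unsigned int shifted = 1u << 24;        // shift left: 1 becomes 2^24 = 16,777,216
    unsigned int mask    = (1u << 24) - 1;  // 24 one-bits: a common trick for building bit masks
    unsigned int halved  = 16u >> 1;        // shift right by 1: divide by 2 (= 8)

    // and here << is the stream insertion operator, not a shift
    std::cout << shifted << " " << mask << " " << halved << std::endl;
    return 0;
}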

CUDA + THRUST + Eclipse

Quickstart

Assumptions: A working CUDA environment (I'm using OS X for this example).
  • nvcc --version --> should display the CUDA version, build date, and version of the tools installed.
  • ./deviceQuery from /Developer/GPU Computing/C/bin/darwin/release (for OS X) should produce output for your device
1. Download the current library from the Thrust Project (currently 1.3.0) - http://code.google.com/p/thrust/downloads/list

2. Select a location and unzip the thrust library. You can unzip the library into the default CUDA include location (/usr/local/cuda/include). I prefer to unzip the library in my home directory (specifically the Downloads directory), but it's up to you!
  • unzip thrust-v1.3.0.zip
This will create a directory called thrust

3. Add the libraries within your project in Eclipse
  • Project Name --> Properties
  • C/C++ Build --> Settings
  • CUDA NVCC Compiler --> Includes
  • Add (On the same line as Include Paths - green + button)
I originally added /Users/marklagatuz/Downloads/thrust, but was receiving the following error: error: thrust/host_vector.h: No such file or directory

The code compiled after removing /thrust from the -I on the command line, i.e. the include path should be the absolute path up to (but not including) the thrust directory (/Users/marklagatuz/Downloads for me).
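
To confirm the include path actually works, here is a minimal .cu along the lines of the Thrust QuickStartGuide (the vector size is arbitrary):

// thrustTest.cu - compile with nvcc
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <cstdlib>
#include <iostream>

int main() {
    // generate some random data on the host
    thrust::host_vector<int> h_vec(1 << 20);
    thrust::generate(h_vec.begin(), h_vec.end(), rand);

    // copy to the device, sort there, and copy the results back
    thrust::device_vector<int> d_vec = h_vec;
    thrust::sort(d_vec.begin(), d_vec.end());
    thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());

    std::cout << "smallest: " << h_vec.front() << ", largest: " << h_vec.back() << std::endl;
    return 0;
}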

--

References:

1. Thrust QuickStartGuide
  • http://code.google.com/p/thrust/wiki/QuickStartGuide

Thursday, June 10, 2010

CUDA Quick Tips, Reference, and Cheat Sheets

Here are some quick tips and references I strung together while I'm learning CUDA

A. Size of a Grid (in Blocks):
  • gridDim.x (1Dimensional)
  • gridDim.x and gridDim.y (2Dimensional; equal for an N x N Grid)

B. Size of a Block (in Threads):
  • blockDim.x (1Dimensional)
  • blockDim.x and blockDim.y (2Dimensional; equal for an N x N Block)

C. Thread Local Index within its block (assuming a 1Dimensional Block):
  • threadIdx.x

D. Block Index within the Grid
  • blockIdx.x (1Dimensional)
  • blockIdx.x (2Dimensional) --> Current Column Index (Length) of the Block within an N x N Grid
  • blockIdx.y (2Dimensional) --> Current Row Index (Height) of the Block within an N x N Grid

E. Thread Global Index across the entire grid (assuming a 1 Dimensional Grid):
  • (blockDim.x * blockIdx.x) + threadIdx.x
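
A small sketch of E in action (the kernel, sizes, and names are my own):

// fill1D.cu - hypothetical example of the 1-Dimensional global index
#include <cuda_runtime.h>

__global__ void fillKernel(float* data, int numElements) {
    int i = (blockDim.x * blockIdx.x) + threadIdx.x;   // E: global thread index across the grid
    if (i < numElements)                               // guard: the last block may be partially used
        data[i] = 2.0f * i;
}

int main() {
    const int numElements = 1000;
    float* d_data = 0;
    cudaMalloc((void**)&d_data, numElements * sizeof(float));

    const int blockSize = 256;
    const int gridSize = (numElements + blockSize - 1) / blockSize;  // round up so every element is covered
    fillKernel<<<gridSize, blockSize>>>(d_data, numElements);
    cudaThreadSynchronize();

    cudaFree(d_data);
    return 0;
}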

F. Thread Global Index across the entire grid (assuming a 2Dimensional Grid and Block):

F-1. Obtain the current global column index (assuming you have an N x N Block):
  • (blockIdx.x * blockDim.x) + threadIdx.x
F-2. Obtain the current global row index (assuming you have an N x N Block):
  • (blockIdx.y * blockDim.y) + threadIdx.y
Since you have an N x N Block, the Length and Height are the same (blockDim.x == blockDim.y).

Quick Example

N = 1024. You have to process N x N elements (1024 x 1024). You could decompose the grid like so: set the blockSize to 16 threads per dimension. Then gridSize = N / blockSize --> gridSize = 1024 / 16 = 64 blocks per dimension. Maybe not the most efficient way, but since it's only an example it will do!

So your grid is composed of 4096 Blocks (64 x 64), and each Block is composed of 256 threads (16 x 16).

Total Blocks * Total Threads per Block = 4096 * 256 = 1,048,576 = N * N = 1024 * 1024.

To process each element serially, you would probably have a nested for loop:
for (each col)
    for (each row)
        process element

To access each element for processing in CUDA (assuming you are storing results in a 1D array):

  • (Global Row * Number of Elements) + Global Column
  • Global Row = (blockIdx.y * blockDim.y) + threadIdx.y
  • Global Column = (blockIdx.x * blockDim.x) + threadIdx.x
  • Number of Elements = N = number of elements lengthwise (1024 in my example)
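
Putting the pieces together, here is a sketch of a kernel that touches every element of the 1024 x 1024 example above and stores the results in a flattened 1D array (the kernel and variable names are mine):

// process2D.cu - hypothetical example of 2-Dimensional global indexing
#include <cuda_runtime.h>

__global__ void processKernel(float* result, int n) {
    int col = (blockIdx.x * blockDim.x) + threadIdx.x;   // Global Column
    int row = (blockIdx.y * blockDim.y) + threadIdx.y;   // Global Row
    if (row < n && col < n)
        result[(row * n) + col] = row + 0.001f * col;    // (Global Row * N) + Global Column
}

int main() {
    const int n = 1024;                        // N x N = 1024 x 1024 elements
    float* d_result = 0;
    cudaMalloc((void**)&d_result, n * n * sizeof(float));

    dim3 block(16, 16);                        // 256 threads per block
    dim3 grid(n / block.x, n / block.y);       // 64 x 64 = 4096 blocks
    processKernel<<<grid, block>>>(d_result, n);
    cudaThreadSynchronize();

    cudaFree(d_result);
    return 0;
}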

More quick tips in the future ...

Tuesday, June 1, 2010

Quickstart: CUDA using Bayreuth University CUDA Toolchain for Eclipse

I've been trolling through Google for a simple solution for integrating CUDA with Eclipse, and found a university which built an Eclipse plugin. This is a fantastic solution because my previous attempts required me to create my own Makefile (which partially defeats the purpose of using Eclipse!)

Here is my Quickstart for the plugin

Assumptions:
  • A fully functional C/C++ working environment (within the Eclipse IDE and on the command line)
  • A fully functional CUDA environment (including the CUDA Driver, Toolkit, and SDK)
  • This assumes you are using OS X (Linux should be quite similar)
1. Install the Plugin (Trivial: Help --> Install New Software, using the Bayreuth update site listed under Resources)
2. Add nvcc to your Path
  • Go to Eclipse --> Preferences
  • Click on C/C++ --> Environment
  • Under Environment variables to set --> click Add
  • Name = PATH (Note: Make sure PATH is all upper case)
  • Value = /usr/local/cuda/bin
  • Apply and OK
3. Create a new CUDA Project and Setup Compile and Build Environment
  • Ctrl + mouse click --> New --> C++ project
  • Under Project type box --> Executable --> select Empty Project
  • Name your project
  • Uncheck the following: Show project types and toolchains only if they are supported on the platform
  • Under Toolchains --> select CUDA Toolchain
  • Click Next
  • Click on Advanced Settings
  • Under C/C++ Build -->Environment --> Confirm PATH is set from previous step (should be USER: PREFS under Origin Column)
  • Under C/C++ Build --> Settings --> Tool Settings Tab --> CUDA NVCC Compiler --> Includes --> add /usr/local/cuda/include
  • Under C/C++ Build --> C++ Linker --> change Command from g++ to nvcc
  • Under C/C++ Build --> C++ Linker --> Libraries --> add cudart to Libraries (-l) and add /usr/local/cuda/lib to Library search path (-L)
  • Apply and OK
At this point you should have a fully functional CUDA Eclipse environment to develop CUDA Applications. Drop some pre-built (non-SDK-dependent) code into the project and build it (a minimal example follows). If you want to run some of the SDK dependent code (located in /Developer/GPU Computing/C/bin/darwin/release), please follow the instructions located at Life Of A Programmer Geek.
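
If you don't have code handy, something like the following minimal, SDK-independent .cu is enough to smoke-test the toolchain (the whole file is my own throwaway example):

// toolchainTest.cu - hypothetical smoke test, no SDK headers required
#include <cstdio>
#include <cuda_runtime.h>

__global__ void addOne(int* data, int n) {
    int i = (blockIdx.x * blockDim.x) + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 256;
    int h_data[n];
    for (int i = 0; i < n; ++i) h_data[i] = i;

    int* d_data = 0;
    cudaMalloc((void**)&d_data, n * sizeof(int));
    cudaMemcpy(d_data, h_data, n * sizeof(int), cudaMemcpyHostToDevice);

    addOne<<<n / 64, 64>>>(d_data, n);         // 4 blocks of 64 threads

    cudaMemcpy(h_data, d_data, n * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_data);

    printf("h_data[0] = %d, h_data[255] = %d\n", h_data[0], h_data[255]);  // expect 1 and 256
    return 0;
}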

*** UPDATE ***

When attempting to build my project, I was getting the following error message during the build phase:

make all
Building target: CUDAToolchainProject
ld: unknown option: -oCUDAToolchainProject
I tracked the problem down to not having whitespace in between the following:
  • ${OUTPUT_FLAG}${OUTPUT_PREFIX}${OUTPUT}
  • This is located at Properties --> C/C++ Build --> Settings --> C++ Linker
  • Under Expert Settings --> Command line pattern
To mitigate the problem ... just add whitespace in between the following:
  • ${COMMAND} ${FLAGS} ${OUTPUT_FLAG} ${OUTPUT_PREFIX} ${OUTPUT} ${INPUTS}
However, I came across another error during the build phase:

Invoking: C++ Linker
g++ -L/usr/local/cuda/lib -o "CUDAToolchainProject" ./src/cu_mandelbrotCUDA_D.o ./src/cu_mandelbrotCUDA_H.o -lcudart
ld: warning: in ./src/cu_mandelbrotCUDA_D.o, file is not of required architecture
ld: warning: in ./src/cu_mandelbrotCUDA_H.o, file is not of required architecture
ld: warning: in /usr/local/cuda/lib/libcudart.dylib, file is not of required architecture
Undefined symbols:
"_main", referenced from:
start in crt1.10.6.o
ld: symbol(s) not found
collect2: ld returned 1 exit status
make: *** [CUDAToolchainProject] Error 1

To mitigate this problem ... I changed the C++ Linker from g++ to nvcc
  • Properties --> C/C++ Build --> Settings --> C++ Linker
  • Command --> change from g++ to nvcc

The build phase completed successfully and an executable was generated!

The next steps are optional (follow them if you want to keep Eclipse's general project structure).

4. Create Source Folders (Trivial)
  • Ctrl + mouse click --> New --> Source Folder
  • Name your folder
--

Resources

1. Bayreuth University Website
  • http://www.ai3.inf.uni-bayreuth.de/software/eclipsecudaqt/updates
2. NVIDIA CUDA forum: thread 160564
  • http://forums.nvidia.com/index.php?showtopic=160564
3. Life Of A Programmer Geek
  • http://lifeofaprogrammergeek.blogspot.com/2008/07/using-eclipse-for-cuda-development.html
4. Trial & Error

Tuesday, May 25, 2010

Example of Eclipse + CUDA Integration

I'm tired of using vi to edit my .cu source files, so I decided to attempt to integrate CUDA with the Eclipse IDE.

Right off the bat, when you build your CUDA project within Eclipse, it will FAIL! Determined not to use vi, I trolled Google for help. Additionally, I attempted to mimic a build similar to a simple Hello World Project.

Assumptions: Drop this Makefile (located after the Example Project Creation) at the top of your Project Directory to have a successful build.

MAC OS X

1. Create a new C++ Project
  • ctrl + mouse click --> New --> C++ Project

2. Create an Empty Project
  • Select Empty Project
  • Create a project name
  • Click Next

3. Click Advanced Settings

4. Remove Automatic Makefile Generation
  • C/C++ Build --> Makefile Generation
  • Uncheck Generate Makefile automatically

5. Apply and OK

6. Finish

7. Create a source directory within your project
  • ctrl+mouse click on project --> New --> Source Folder
  • Name the folder
  • Finish

8. Create C++ source files within the source folder

9. Create a regular file for the associated Makefile
  • ctrl+mouse click on the project --> New -->File
  • Name the file Makefile

10. Copy the example Makefile from below into your Project
(using your own information)

##############################################
#
# Makefile for CUDA
#
# A hack created by Mark Lagatuz to compile .cu files within Eclipse
#
# This makefile assumes you separate the device and host code into
# separate files:
#
# HOST: _H appended to file name (code_H.cu)
# DEVICE: _D appended to file name (code_D.cu)
#
# Replace the following with your own information:
#
# CUDA_INSTALL_PATH: /path/to/your/cuda/installation
# PROGRAM: Name of Executable
#
##############################################

# CUDA Installation Path
CUDA_INSTALL_PATH = /usr/local/cuda

# Source Folder (Relative to where Makefile is located)
SRC_FOLDER = src

# Compiler
NVCC = $(CUDA_INSTALL_PATH)/bin/nvcc

# Includes
INCLUDE = $(CUDA_INSTALL_PATH)/include

# Program or Executable
PROGRAM = mandelbrotCUDA

# Device Code
DEVICE = _D

# Host Code
HOST = _H

all : $(PROGRAM)

# Create Executable by linking *.o (host and device object's)
$(PROGRAM) : $(PROGRAM)$(DEVICE).o $(PROGRAM)$(HOST).o
	$(NVCC) -o $(PROGRAM) $^

# Compile device code to an object
$(PROGRAM)$(DEVICE).o : $(SRC_FOLDER)/$(PROGRAM)$(DEVICE).cu
	$(NVCC) -I $(INCLUDE) -o $@ -c $<

# Compile host code to an object
$(PROGRAM)$(HOST).o : $(SRC_FOLDER)/$(PROGRAM)$(HOST).cu
	$(NVCC) -I $(INCLUDE) -o $@ -c $<

# Remove *.o files and executables
clean :
	rm *.o $(PROGRAM)

--
Resources

1. MAC OS X
  • /Developer/GPU Computing/C/common/common.mk (I utilized this file to help build my Makefile)

2. Life of a Programmer Geek
  • http://lifeofaprogrammergeek.blogspot.com/2008/07/using-eclipse-for-cuda-development.html

3. How to set up CUDA in Eclipse
  • http://imonad.com/blog/how-to-set-cuda-in-eclipse/

Parallel Tools Platform (PTP) Plugin for Eclipse + OpenMPI

Quickstart

Assumption: A working Eclipse Development Environment successfully installed (JDT, J2EE, CDT).

1. Install the CDT (if not already installed). Replace Galileo with your version of Eclipse (e.g., Europa)
  • Help --> Install New Software
  • Work With --> Galileo --> http://download.eclipse.org/releases/galileo
  • Programming Languages --> Eclipse C/C++ Development Tools

2. Install the PTP Plugin. Replace Galileo with your version of Eclipse (e.g., Europa), and insert your own name for the NAME field
  • Help --> Install New Software
  • Work With --> Add
  • PTP (NAME)
  • http://download.eclipse.org/tools/ptp/releases/galileo (Location)

3. Install the following software (these are the minimal plugins I use for OpenMPI):
  • Parallel Tools Platform: Parallel Tools Platform Core
  • Parallel Tools Platform: Parallel Tools Platform End-User
  • Parallel Tools Platform: PTP Common Utilities
  • Parallel Tools Platform: PTP Parallel Language Development Tools
  • Parallel Tools Platform: PTP Scalable Debug Manager
  • Parallel Tools Platform: PTP Support for OpenMPI

You now have a fully functional environment to begin using Eclipse as your Parallel Computing IDE!

A few more steps are necessary to actually compile and run your OpenMPI code ...

Specify the MPI include path

1. Within the Eclipse IDE
  • Window --> Preferences (OS X: Eclipse --> Preferences)
  • Select Parallel Tools
  • Parallel Language Development Tools --> MPI
  • Under MPI --> Include Paths --> New

2. Navigate to the location of your include files for OpenMPI
  • /home/mlagatuz/Desktop/openmpi/include (Red Hat Enterprise Linux)
  • /usr/include (OS X) --> When using the GUI, I needed to select /Developer/SDKs/MacOSX10.6.sdk/usr/include

3. Apply and OK

4. For some odd reason, the above steps only cover C, not C++. You will need to complete the following for C++:
  • Right-Click on your project name --> Select Properties --> C/C++ General --> Paths and Symbols
  • Under Includes Tab --> Select GNU C++ --> Add
  • Navigate to your include files for OpenMPI (same as above)
  • Apply and OK
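
A minimal hello-world to confirm the include paths and toolchain are wired up (the file and variable names are my own):

// helloMPI.cpp - hypothetical sanity check
#include <iostream>
#include <mpi.h>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // which process am I?
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // how many processes in total?

    std::cout << "Hello from rank " << rank << " of " << size << std::endl;

    MPI_Finalize();
    return 0;
}

Outside Eclipse, the same file builds with mpic++ and runs with mpirun -np 2 followed by the executable name.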

Adding a Resource Manager

This allows you to submit jobs onto your localhost. This assumes you will be running only on your own workstation, and not on a cluster.

1. Switch to the Parallel Runtime Perspective

2. Right click on the Resource Manager

3. Click on Resource Manager

4. Select Resource Manager Type: for this example it will be OpenMPI. This will be different if you have another MPI implementation (e.g., MPICH).

5. During the next couple of screens, selecting the default options will suffice. You would deviate if you're working on a cluster.

6. Start Resource Manager

--
Resources

1. Setup for MPI tools within the Parallel Language Development Tools
  • http://www.eclipse.org/ptp/documentation/org.eclipse.ptp.pldt.help/html/setup.html

2. Trial & Error

OpenMPI Quickstart

Setting up your environment for OpenMPI: Quickstart

Assumptions: C and C++ compilers are already installed (usually located in /usr/bin). Additionally, if you have root (or write) access to /usr/local/bin, then after untar'ing the file (Step 3), omit the --prefix option in Step 5.

1. Download the latest openmpi source files:

  • http://www.open-mpi.org/software/ompi/v1.4/

2. Find out how many processors are available to utilize:

[mlagatuz@fedora-boot Download]$ grep 'processor.*:'\
/proc/cpuinfo | wc -l
2
[mlagatuz@fedora-boot Download]$

3. cd into the download directory and untar the file

[mlagatuz@fedora-boot Download]$ tar -xzf \
openmpi-1.4.2.tar.gz

4. cd into the newly created openmpi- directory and view the INSTALL file. Here you will find more information on how to install OpenMPI

[mlagatuz@fedora-boot Download]$ cd openmpi-1.4.2
[mlagatuz@fedora-boot Download]$ less INSTALL

5. Run configure (./configure) and append the prefix option with your preference of binary installation location (/home/mlagatuz/Desktop/openmpi for me)
[mlagatuz@fedora-boot openmpi-1.4.2]$ ./configure \
--prefix=/home/mlagatuz/Desktop/openmpi

6. Run the following within the same directory --> make all install
[mlagatuz@fedora-boot openmpi-1.4.2]$ make all install

7. cd into your home directory and edit your .bash_profile (or .cshrc) to include the binary installation location within your PATH, then source it
[mlagatuz@fedora-boot build]$ cd
[mlagatuz@fedora-boot ~]$ vi .bash_profile

PATH=$PATH:$HOME/bin:/home/mlagatuz/Desktop/openmpi/bin

You now have a fully functional parallel processing environment!
[mlagatuz@fedora-boot ~]$ which mpic++
~/Desktop/openmpi/bin/mpic++
[mlagatuz@fedora-boot ~]$
--
Resources

1. MPI in 30 Minutes
  • http://www.linux-mag.com/id/5759
2. Trial & Error

Monday, May 24, 2010

CUDA (Version 3.0) on OS X

Installing a working CUDA Environment

Assumptions: You have a working C/C++ environment (compilers and libraries --> Xcode)

I was able to install the drivers, toolkit, and SDK on my MBP running OS X 10.6.3. Please refer to NVIDIA's CUDA website to confirm your video card is CUDA ready, or you can check it here --> http://www.nvidia.com/object/cuda_gpus.html

1. Set the following variables in your .bash_profile or .cshrc (I use csh at work, but bash at home)
  • PATH: /cuda/bin (export PATH=$PATH:/usr/local/cuda/bin)
  • DYLD_LIBRARY_PATH: /cuda/lib (export DYLD_LIBRARY_PATH=/usr/local/cuda/lib)

2. Download the Development Drivers, Toolkit, and SDK from:
  • http://developer.nvidia.com/object/cuda_3_0_downloads.html

3. Install the drivers and verify the installation:
Lagatuz-MacBookPro:cuda marklagatuz$ ls
include lib
Lagatuz-MacBookPro:cuda marklagatuz$ pwd
/usr/local/cuda

4. Install the Toolkit and verify the installation:
Lagatuz-MacBookPro:cuda marklagatuz$ ls
bin cudaprof doc include lib man open64 src
Lagatuz-MacBookPro:cuda marklagatuz$ pwd
/usr/local/cuda

5. Install the SDK and verify the installation:
Lagatuz-MacBookPro:cuda marklagatuz$ cd /Developer/GPU\ Computing/
Lagatuz-MacBookPro:GPU Computing marklagatuz$ ls
C OpenCL bin doc lib shared
Lagatuz-MacBookPro:GPU Computing marklagatuz$ pwd
/Developer/GPU Computing

6. Complete installation of the SDK:
  • cd into C and run make:
Lagatuz-MacBookPro:GPU Computing marklagatuz$ ls
C OpenCL bin doc lib shared
Lagatuz-MacBookPro:GPU Computing marklagatuz$ cd C
Lagatuz-MacBookPro:C marklagatuz$ make
Finished building all

7. At this point after the make you should have a fully functional CUDA environment. To test out your environment:
  • cd into /Developer/GPU Computing/C/bin/darwin/release
  • execute the file deviceQuery: ./deviceQuery
Lagatuz-MacBookPro:C marklagatuz$ ls
Makefile bin doc releaseNotesData tools
Samples.html common lib src
Lagatuz-MacBookPro:C marklagatuz$ cd bin/
Lagatuz-MacBookPro:bin marklagatuz$ ls
darwin
Lagatuz-MacBookPro:bin marklagatuz$ cd darwin/
Lagatuz-MacBookPro:darwin marklagatuz$ ls
release
Lagatuz-MacBookPro:darwin marklagatuz$ cd release/
Lagatuz-MacBookPro:release marklagatuz$ ./deviceQuery

The output should give you information on your CUDA Enabled Device!

--
Resources

1. NVIDIA CUDA 3.0 Downloads
  • http://developer.nvidia.com/object/cuda_3_0_downloads.html
2. Getting started Guide for MAC PDF File
  • http://developer.download.nvidia.com/compute/cuda/3_0/docs/GettingStartedMacOS.pdf
3. Trial & Error

CUDA (Version 3.0) on Linux

Installing a working CUDA Environment

Assumptions: You have a working C/C++ environment (compilers and libraries)

I was able to install the drivers, toolkit, and SDK on Red Hat Enterprise 5 (without root access).

Please refer to NVIDIA's CUDA website to confirm your video card is CUDA ready (or you can check it here --> http://www.nvidia.com/object/cuda_gpus.html)

1. Create the following directories on your Desktop (or another location specified by you):
  • Download Directory: /path/to/your/Desktop/cudaDownload (create the cudaDownload directory)
  • CUDA Installation Directory: /path/to/your/Desktop/cudadir (create the cudadir directory)
  • SDK Installation Directory: /path/to/your/Desktop/NVIDIA_GPU_COMPUTING_SDK (create the NVIDIA_GPU_COMPUTING_SDK directory)

2. Set the following variables in your .bash_profile or .cshrc (I use csh at work, but bash at home)
  • PATH: /cuda/bin
  • LD_LIBRARY_PATH: /cuda/lib

3. Download the Development Drivers, Toolkit, and SDK from:
  • http://developer.nvidia.com/object/cuda_3_0_downloads.html

4. Install the drivers as root. If you've installed proprietary NVIDIA drivers before, the process is the same. I wasn't able to install the drivers at work because of the lack of root access:
  • Alt-Ctrl-F2
  • Log in as root (or yourself)
  • Stop your window manager: telinit 3 usually does it
  • Become root (if not already)
  • Navigate to the directory where you've downloaded the NVIDIA graphic drivers
  • Run sh on the downloaded .run file

5. I usually reboot the system after installing the video card drivers, but I guess you could restart the window manager and log back in.

6. Install the Toolkit

7. Install the SDK:
I ran into a few problems at this point. Some Linux distributions look for glut libraries (specifically libglut.so) in /usr/lib (/usr/lib64 for 64-bit libraries). Red Hat Enterprise was complaining about the glut library (-lglut) not being found when I ran the make command. A Google search led me to this website:
  • http://forums.nvidia.com/lofiversion/index.php?t46575.html
It explains how to create a symbolic link (which you may or may not need to do).

[root@rhel-boot lib] pwd
/usr/lib
[root@rhel-boot lib] ln -s /usr/lib/libglut.so.3 /usr/lib/libglut.so
[root@rhel-boot lib] cd ../lib64
[root@rhel-boot lib] ln -s /usr/lib64/libglut.so.3 /usr/lib64/libglut.so

cd into /path/to/your/Desktop/NVIDIA_GPU_Computing_SDK/C

run the make command

8. At this point after the make command you should have a fully functional CUDA Development Environment. To test your environment:
  • cd into /path/to/your/Desktop/NVIDIA_GPU_Computing_SDK/C/bin/linux/release
  • execute the binary deviceQuery: ./deviceQuery

The output should give you information on your CUDA Enabled Device!

--
Resources

1. NVIDIA CUDA 3.0 Downloads
  • http://developer.nvidia.com/object/cuda_3_0_downloads.html
2. NVIDIA Forums
  • http://forums.nvidia.com/lofiversion/index.php?t46575.html
3. Trial & Error