GPU Developer's Guide


3-5. Run (gpu-run)

By setting set(EXECUTABLE_OUTPUT_PATH ${CMAKE_BINARY_DIR}/bin) in the root CMakeLists.txt, all compiled executables are placed in the bin subdirectory of your build directory.

There are two ways to execute a compiled binary, depending on whether it contains GPU code.

1. Code that includes GPU code.

  • Including CUDA runtime APIs.

    • e.g., cuda_runtime_api.h ...

  • Including HEaaN device APIs.

    • e.g., functions in device/device.hpp and device/CudaTools.hpp

In this case, you must run the binary with the built-in CLI named gpu-run; a minimal example of such a program is sketched after the commands below.

cd bin
gpu-run my_program
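
For reference, the sketch below shows the kind of program that falls into this first category. It is an illustrative assumption rather than an example from this guide: it only calls the standard CUDA runtime API (cudaGetDeviceCount from cuda_runtime_api.h) and uses no HEaaN-specific headers.

// Minimal sketch of a binary that counts as GPU code: it calls the CUDA
// runtime API directly, so it must be launched through gpu-run.
#include <cuda_runtime_api.h>

#include <iostream>

int main() {
    int deviceCount = 0;
    // cudaGetDeviceCount is a CUDA runtime API call.
    cudaError_t err = cudaGetDeviceCount(&deviceCount);
    if (err != cudaSuccess) {
        std::cerr << "CUDA error: " << cudaGetErrorString(err) << "\n";
        return 1;
    }
    std::cout << "Visible CUDA devices: " << deviceCount << "\n";
    return 0;
}

A binary built from code like this is executed as gpu-run my_program, not ./my_program.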

2. Code without GPU code.

  • Pure CPU code only (standard C++ programs)

Code that runs only on the CPU can be executed directly using the standard Linux/Unix execution method, without needing the gpu-run command.

cd bin
./my_program
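
By contrast, the sketch below is a purely CPU-side standard C++ program (again an illustrative assumption, not an example from this guide); because it includes no CUDA runtime or HEaaN device headers, it can be run directly this way.

// Minimal sketch of a CPU-only program: it includes no CUDA runtime or HEaaN
// device headers, so it can be executed directly as ./my_program.
#include <iostream>
#include <vector>

int main() {
    std::vector<int> values{1, 2, 3, 4};
    int sum = 0;
    for (int v : values)
        sum += v;
    std::cout << "sum = " << sum << "\n";
    return 0;
}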