C++ future and promise | "Asynchronous" Guide
Key takeaways
std::future and std::promise: asynchronous results, the shared state, and pairing with std::async or thread pools.
What are future and promise?
std::future and std::promise are C++11 components for asynchronous programming. They provide a mechanism to retrieve results from tasks running in separate threads, enabling clean separation between producers and consumers.
Why are they needed?
- Result Delivery: Get results from asynchronous tasks
- Exception Propagation: Safely propagate exceptions across threads
- Synchronization: Wait for task completion without busy-waiting
- Clean API: Better than raw threads with shared variables
// ❌ Without future/promise: Complex, error-prone
std::mutex mtx;
std::condition_variable cv;
int result;
bool ready = false;

void compute() {
    int value = expensive_computation();
    {
        std::lock_guard<std::mutex> lock(mtx);
        result = value;
        ready = true;
    }
    cv.notify_one();
}

// ✅ With future/promise: Simple, safe
std::future<int> result = std::async(expensive_computation);
int value = result.get(); // Clean and simple
std::async
With std::async (covered in Asynchronous Execution with async), you can run a task asynchronously and retrieve its result through a future.
#include <chrono>
#include <future>
#include <iostream>
#include <thread>
using namespace std;

int compute(int x) {
    this_thread::sleep_for(chrono::seconds(1));
    return x * x;
}

int main() {
    // Asynchronous execution
    future<int> result = async(launch::async, compute, 10);
    cout << "Calculating..." << endl;
    // Wait for the result
    cout << "Result: " << result.get() << endl; // 100
}
How std::async works:
- Creates a task (function + arguments)
- Launches it asynchronously (new thread or deferred)
- Returns a future object
- get() retrieves the result (blocks if not ready)
// Step-by-step execution
auto future = std::async(std::launch::async, compute, 10);
// 1. New thread created
// 2. compute(10) starts executing
// 3. Main thread continues
std::cout << "Doing other work...\n";
int result = future.get();
// 4. Blocks until compute() finishes
// 5. Returns result
promise and future
Relationship between promise and future
graph LR
A[promise] -->|get_future| B[future]
C[Producer Thread] -->|set_value| A
B -->|get| D[Consumer Thread]
style A fill:#e1f5ff
style B fill:#ffe1e1
style C fill:#e1ffe1
style D fill:#ffe1ff
void compute(promise<int> p, int x) {
    this_thread::sleep_for(chrono::seconds(1));
    p.set_value(x * x); // Set the result
}

int main() {
    promise<int> p;
    future<int> f = p.get_future();
    thread t(compute, move(p), 10);
    cout << "Calculating..." << endl;
    cout << "Result: " << f.get() << endl; // 100
    t.join();
}
Workflow
sequenceDiagram
participant Main as Main Thread
participant Promise as promise
participant Future as future
participant Worker as Worker Thread
Main->>Promise: create
Main->>Future: get_future()
Main->>Worker: start thread
Main->>Main: other work
Worker->>Worker: compute
Main->>Future: get()
Note over Main,Future: waiting...
Worker->>Promise: set_value(result)
Promise->>Future: deliver result
Future->>Main: return result
Main->>Worker: join()
launch Policies
Comparison of Policies
| Policy | Execution Time | Thread | Overhead | Suitable Tasks |
|---|---|---|---|---|
| async | Immediate | New Thread | High | CPU-intensive, long tasks |
| deferred | On get() | Current Thread | Low | Short tasks, conditional execution |
| async \| deferred | Implementation-defined | Automatic | Medium | General use |
// async: new thread
auto f1 = async(launch::async, compute, 10);
// deferred: delayed execution (on get() call)
auto f2 = async(launch::deferred, compute, 10);
// automatic selection
auto f3 = async(compute, 10);
Practical Examples
Example 1: Parallel Computation
#include <future>
#include <iostream>
#include <vector>
using namespace std;

int sumRange(int start, int end) {
    int sum = 0;
    for (int i = start; i < end; i++) {
        sum += i;
    }
    return sum;
}

int main() {
    const int N = 1000000;
    const int numThreads = 4;
    const int chunkSize = N / numThreads;
    vector<future<int>> futures;
    // Parallel execution
    for (int i = 0; i < numThreads; i++) {
        int start = i * chunkSize;
        int end = (i + 1) * chunkSize;
        futures.push_back(async(launch::async, sumRange, start, end));
    }
    // Collect results
    int total = 0;
    for (auto& f : futures) {
        total += f.get();
    }
    cout << "Total: " << total << endl;
}
Example 2: File Download
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>
#include <vector>
using namespace std;

string downloadFile(const string& url) {
    // Simulate download
    this_thread::sleep_for(chrono::seconds(1));
    return "Content from " + url;
}

int main() {
    vector<string> urls = {
        "http://example.com/file1",
        "http://example.com/file2",
        "http://example.com/file3"
    };
    vector<future<string>> futures;
    // Parallel download
    for (const auto& url : urls) {
        futures.push_back(async(launch::async, downloadFile, url));
    }
    // Collect results
    for (auto& f : futures) {
        cout << f.get() << endl;
    }
}
Example 3: Timeout
int longComputation() {
    this_thread::sleep_for(chrono::seconds(5));
    return 42;
}

int main() {
    auto f = async(launch::async, longComputation);
    // Wait for 2 seconds
    if (f.wait_for(chrono::seconds(2)) == future_status::ready) {
        cout << "Result: " << f.get() << endl;
    } else {
        cout << "Timeout" << endl;
    }
}
Example 4: Exception Propagation
int divide(int a, int b) {
    if (b == 0) {
        throw runtime_error("Division by zero");
    }
    return a / b;
}

int main() {
    auto f = async(launch::async, divide, 10, 0);
    try {
        int result = f.get(); // Re-throws exception
        cout << result << endl;
    } catch (const exception& e) {
        cout << "Error: " << e.what() << endl;
    }
}
shared_future
int compute() {
    this_thread::sleep_for(chrono::seconds(1));
    return 42;
}

int main() {
    shared_future<int> sf = async(launch::async, compute).share();
    // Accessible from multiple threads
    thread t1([sf]() {
        cout << "Thread 1: " << sf.get() << endl;
    });
    thread t2([sf]() {
        cout << "Thread 2: " << sf.get() << endl;
    });
    t1.join();
    t2.join();
}
Common Issues
Issue 1: Calling get() Multiple Times
// ❌ get() can only be called once
future<int> f = async(compute, 10);
int x = f.get();
// int y = f.get(); // Invalid: typically throws std::future_error

// ✅ Call get() once and keep the value
int result = f.get();
// ... reuse result as often as needed
Issue 2: Future Destruction
// ❌ Future destruction causes blocking
{
    auto f = async(launch::async, compute, 10);
} // Blocks here

// ✅ Explicit wait
auto f = async(launch::async, compute, 10);
f.wait();
Issue 3: Ignoring Exceptions
// ❌ Ignoring exceptions
auto f = async(launch::async, []() {
    throw runtime_error("Error");
});
// If f.get() is not called, the exception is silently dropped

// ✅ Handle exceptions
try {
    f.get();
} catch (const exception& e) {
    cout << e.what() << endl;
}
Advanced promise Usage
void compute(promise<int> p, int x) {
    try {
        if (x < 0) {
            throw invalid_argument("Negative value not allowed");
        }
        p.set_value(x * x);
    } catch (...) {
        p.set_exception(current_exception());
    }
}

int main() {
    promise<int> p;
    future<int> f = p.get_future();
    thread t(compute, move(p), -10);
    try {
        cout << f.get() << endl;
    } catch (const exception& e) {
        cout << "Error: " << e.what() << endl;
    }
    t.join();
}
Production Patterns
Pattern 1: Task Pipeline
class TaskPipeline {
    std::vector<std::future<void>> tasks_;
public:
    template<typename F>
    void addTask(F&& task) {
        tasks_.push_back(std::async(std::launch::async, std::forward<F>(task)));
    }

    void waitAll() {
        for (auto& task : tasks_) {
            task.wait();
        }
    }

    ~TaskPipeline() {
        waitAll(); // Ensure all tasks complete
    }
};

// Usage
TaskPipeline pipeline;
pipeline.addTask([]() { processData(); });
pipeline.addTask([]() { sendNotification(); });
pipeline.addTask([]() { updateDatabase(); });
pipeline.waitAll();
Pattern 2: Result Aggregation
template<typename T>
std::vector<T> parallelMap(const std::vector<T>& input,
                           std::function<T(const T&)> func) {
    std::vector<std::future<T>> futures;
    for (const auto& item : input) {
        futures.push_back(std::async(std::launch::async, func, item));
    }
    std::vector<T> results;
    for (auto& future : futures) {
        results.push_back(future.get());
    }
    return results;
}

// Usage (explicit <int>: T cannot be deduced through std::function from a lambda)
std::vector<int> numbers = {1, 2, 3, 4, 5};
auto squares = parallelMap<int>(numbers, [](int x) { return x * x; });
Pattern 3: Timeout with Fallback
template<typename T>
T getWithTimeout(std::future<T>& future,
                 std::chrono::milliseconds timeout,
                 T fallback) {
    if (future.wait_for(timeout) == std::future_status::ready) {
        return future.get();
    }
    return fallback;
}

// Usage
auto future = std::async(std::launch::async, expensiveComputation);
int result = getWithTimeout(future, std::chrono::seconds(5), -1);
if (result == -1) {
    std::cout << "Timeout, using fallback\n";
}
FAQ
Q1: async vs thread?
A:
- async: Simple API, automatic result delivery, exception propagation
- thread: Fine-grained control, manual synchronization, no direct result return
// async: Simple
auto future = std::async(compute, 10);
int result = future.get();
// thread: Manual
int result;
std::thread t([&result]() { result = compute(10); });
t.join();
Choose async for tasks that return results. Choose thread for long-running background tasks.
Q2: When should I use future?
A:
- Asynchronous tasks: I/O operations, network requests
- Parallel computation: CPU-intensive tasks that can run concurrently
- Result delivery: When you need to return values from threads
- Exception handling: When you need safe exception propagation
// Perfect for async I/O
auto file1 = std::async(readFile, "data1.txt");
auto file2 = std::async(readFile, "data2.txt");
auto data1 = file1.get();
auto data2 = file2.get();
Q3: What about performance?
A: Thread creation has overhead (roughly tens of microseconds to a millisecond, depending on the platform). For very small tasks, the overhead can outweigh the benefits.
// ❌ Bad: Overhead > computation
for (int i = 0; i < 1000; ++i) {
    auto f = std::async([i]() { return i * 2; }); // Too much overhead
}

// ✅ Good: Batch processing
const int CHUNK_SIZE = 250;
std::vector<std::future<int>> futures;
for (int i = 0; i < 4; ++i) {
    futures.push_back(std::async([i]() {
        int sum = 0;
        for (int j = i * CHUNK_SIZE; j < (i + 1) * CHUNK_SIZE; ++j) {
            sum += j * 2;
        }
        return sum;
    }));
}
Q4: Can future be reused?
A: No. get() moves the result out and can only be called once. Use shared_future for multiple accesses.
// ❌ future: Single access
std::future<int> f = std::async(compute);
int x = f.get();
// int y = f.get(); // Invalid: typically throws std::future_error
// ✅ shared_future: Multiple accesses
std::shared_future<int> sf = std::async(compute).share();
int x = sf.get();
int y = sf.get(); // OK
Q5: How do I handle timeouts?
A: Use wait_for() or wait_until() to check if the result is ready.
auto future = std::async(longComputation);

// wait_for: Relative timeout
if (future.wait_for(std::chrono::seconds(5)) == std::future_status::ready) {
    int result = future.get();
} else {
    std::cout << "Timeout\n";
}

// wait_until: Absolute timeout
auto deadline = std::chrono::system_clock::now() + std::chrono::seconds(5);
if (future.wait_until(deadline) == std::future_status::ready) {
    int result = future.get();
}
Q6: What happens if I don’t call get()?
A: For a future returned by std::async, the destructor blocks until the task completes. This can cause unexpected blocking.
// ❌ Destructor blocks
{
    auto f = std::async(std::launch::async, longTask);
} // Blocks here until longTask completes!

// ✅ Explicit handling
auto f = std::async(std::launch::async, longTask);
f.wait(); // Explicit wait
Q7: How do exceptions work with future?
A: Exceptions are stored in the future and re-thrown when get() is called.
auto future = std::async([]() {
    throw std::runtime_error("Error!");
    return 42; // Unreachable, but fixes the future's type to future<int>
});

try {
    int result = future.get(); // Re-throws the stored exception
} catch (const std::exception& e) {
    std::cout << "Caught: " << e.what() << '\n';
}
Q8: What’s the difference between launch::async and launch::deferred?
A:
- launch::async: Runs immediately in a new thread
- launch::deferred: Runs lazily when get() is called (in the calling thread)
// async: Immediate execution
auto f1 = std::async(std::launch::async, compute);
// compute() starts running NOW in new thread
// deferred: Lazy execution
auto f2 = std::async(std::launch::deferred, compute);
// compute() doesn't run yet
int result = f2.get(); // NOW compute() runs in current thread
Q9: Any resources for learning future/promise?
A:
- “C++ Concurrency in Action” (2nd Edition) by Anthony Williams
- cppreference.com - std::future
- cppreference.com - std::promise
- “Effective Modern C++” by Scott Meyers
Related Posts: Asynchronous Execution with async, shared_future, Thread Basics, packaged_task.
One-line Summary: std::future and std::promise provide a clean mechanism for asynchronous result delivery and exception propagation across threads.
Related Posts (Internal Links)
Other posts connected to this topic:
- C++ async & launch | "Asynchronous Execution" Guide
- C++ shared_future | Sharing a future result across multiple threads
- C++ std::thread Basics | 3 common mistakes (missing join, detach overuse) and how to fix them
- C++ packaged_task | "Packaged Task" Guide
Keywords covered in this post (related search terms)
This post is a useful result for searches on C++, future, promise, async, asynchronous, and related terms.