This guide explains how to use the Float API to import time off from a CSV file. Use this process when migrating data from another tool or when syncing existing time off records from an offline source.
Note: You do not need to be a developer to follow this guide, but basic terminal familiarity helps. This article walks through installing Python, preparing your CSV, configuring the script, and running a dry run before sending data.
Step 1: Install Python
Python is a programming language we will use to run the import script.
Go to python.org/downloads and click the yellow Download Python button for your operating system.
Run the installer:
Windows: During installation, make sure to check "Add Python to PATH".
Mac: Use the .pkg installer provided.
Once installed, open your terminal:
Windows: Open "Command Prompt" or "Windows Terminal"
Mac: Use the built-in "Terminal" app
Verify Python is installed by running:
python3 --version
You should see a version number like
Python 3.14.X
Step 2: Install required Python libraries
We'll use pandas to read CSVs and requests to call the Float API:
pip3 install pandas requests python-dateutil
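To confirm the installs worked, you can run this quick check from Python; it simply imports both libraries and prints their versions:

```python
# Quick sanity check that both libraries import and report their versions.
import pandas
import requests

print("pandas", pandas.__version__)
print("requests", requests.__version__)
```

If either import fails with ModuleNotFoundError, re-run the pip3 command above.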
Step 3: Prepare your CSV file
Create a CSV with the following headers:
person_name, timeoff_type_name, start_date, end_date, hours, full_day, notes
Example data:
Jason Manning,Paid Time Off,2025-11-17,2025-11-21,4,,parental leave
start_date / end_date must be in YYYY-MM-DD format
Use either hours (e.g., 4) or set full_day to 1
notes is optional
Save the file as something like import_timeoffs.csv.
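Before running the import, you can sanity-check the file with a short pandas snippet. This is an optional check, not part of the import script; it uses the example row from above inline so it runs as-is (in practice you would read your own import_timeoffs.csv):

```python
import pandas as pd
from io import StringIO

# Inline sample standing in for import_timeoffs.csv;
# in practice, use pd.read_csv("import_timeoffs.csv") instead.
csv_text = """person_name,timeoff_type_name,start_date,end_date,hours,full_day,notes
Jason Manning,Paid Time Off,2025-11-17,2025-11-21,4,,parental leave
"""
df = pd.read_csv(StringIO(csv_text))

# Check that every expected column is present.
expected = ["person_name", "timeoff_type_name", "start_date",
            "end_date", "hours", "full_day", "notes"]
missing = [c for c in expected if c not in df.columns]
print("Missing columns:", missing or "none")

# Check that the dates parse as YYYY-MM-DD (raises ValueError if not).
pd.to_datetime(df["start_date"], format="%Y-%m-%d")
pd.to_datetime(df["end_date"], format="%Y-%m-%d")
print("Rows ready to import:", len(df))
```

If a date is malformed, the to_datetime call will raise an error pointing at the bad value, which is easier to fix now than mid-import.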
Step 4: Get your Float API key
Log in to your team's Float account as the Account Owner. Only the Account Owner can access the API key.
Go to Team Settings > Integrations.
Copy the API Key.
Keep this key secret!
You can explore Float's full API documentation at https://developer.float.com/.
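You can verify the key works before running the full import with a single cheap request. This sketch uses the same /people endpoint and Bearer auth header that the script below uses; replace "YOUR_API_KEY" with your actual key:

```python
import requests

FLOAT_PEOPLE_URL = "https://api.float.com/v3/people?per-page=1"

def build_headers(api_key: str) -> dict:
    # Bearer auth header, same shape the import script uses.
    return {
        "Authorization": f"Bearer {api_key}",
        "User-Agent": "Float Time Off Importer (you@example.com)",  # include a contact address
    }

def check_key(api_key: str) -> int:
    # Fetch a single person: 200 means the key works, 401 means it does not.
    resp = requests.get(FLOAT_PEOPLE_URL, headers=build_headers(api_key))
    return resp.status_code

# Usage: print(check_key("YOUR_API_KEY"))
```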
Step 5: Copy and configure the script
Paste the script below into a file named import_timeoff_to_float.py, then update the config at the top.
#!/usr/bin/env python3
import hashlib
import json
import time
from datetime import datetime
import pandas as pd
import requests
# === Step 1: CSV + Auth Config ===
CSV_FILE = "import_file_name.csv"  # your time off CSV file
DRY_RUN = True  # keep True for a test pass; set to False to create records for real
SHOW_SUCCESS_OUTPUT = False
BEARER_TOKEN = "Float API Key"  # paste your Float API key here
# === Error log file ===
ERROR_LOG_FILE = "upload-errors-timeoff.txt"
# Initialize error log with start timestamps (local + UTC)
with open(ERROR_LOG_FILE, "w") as f:
    f.write("=== Float Time Off Import Errors ===\n")
    f.write(
        "Import started: "
        f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} (local), "
        f"{datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')} UTC\n\n"
    )
# === Step 2: Float API URLs ===
BASE_URL = "https://api.float.com/v3"
PEOPLE_API_URL = f"{BASE_URL}/people"
TIMEOFF_TYPES_API_URL = f"{BASE_URL}/timeoff-types"
TIMEOFF_API_URL = f"{BASE_URL}/timeoffs"
PER_PAGE = 200
HEADERS = {
    "Authorization": f"Bearer {BEARER_TOKEN}",
    "Content-Type": "application/json",
    "User-Agent": "Float Time Off Importer (support@float.com)",
}
# === Utility Functions ===
def sanitize(value):
    """Return a trimmed string, or an empty string if value is NaN/blank."""
    if pd.isna(value) or str(value).strip() in ("", "NaN", "nan"):
        return ""
    return str(value).strip()
def log_error(row_num, message, payload=None):
    """Append an error entry to the upload-errors file."""
    with open(ERROR_LOG_FILE, "a") as f:
        f.write(f"Row {row_num}: {message}\n")
        if payload:
            try:
                f.write(f"Payload: {json.dumps(payload, indent=2)}\n")
            except Exception:
                f.write(f"Payload: {str(payload)}\n")
        f.write("\n")
def generate_color(key: str) -> str:
    """Deterministically generate a Float-compatible color (RRGGBB, no '#')."""
    if not key:
        return "CCCCCC"
    h = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return h[:6].upper()
def fetch_all(endpoint, label):
    """Fetch paginated results from Float."""
    try:
        print(f"📦 Fetching existing {label} ...")
        cur_page, last_page = 1, 1
        items = []
        while cur_page <= last_page:
            resp = requests.get(
                f"{endpoint}?per-page={PER_PAGE}&page={cur_page}",
                headers=HEADERS,
            )
            resp.raise_for_status()
            last_page = int(resp.headers.get("x-pagination-page-count") or 1)
            items.extend(resp.json())
            cur_page += 1
        print(f"✅ Found {len(items)} {label}")
        return items
    except Exception as e:
        msg = f"Failed to fetch {label}: {e}"
        print("❌ " + msg)
        log_error("FETCH", msg)
        return []
# === Step 5: Load CSV File ===
try:
    df = pd.read_csv(CSV_FILE)
except Exception as e:
    print(f"❌ Failed to read CSV: {e}")
    log_error("CSV", f"Failed to read CSV: {e}")
    raise SystemExit(1)

required_cols = ["person_name", "timeoff_type_name", "start_date", "end_date", "hours"]
missing = [c for c in required_cols if c not in df.columns]
if missing:
    msg = f"CSV missing required columns: {', '.join(missing)}"
    print(f"❌ {msg}")
    log_error("CSV", msg)
    raise SystemExit(1)

print("✅ CSV validation passed.\n")
# === Step 6: Fetch Existing Data ===
people = fetch_all(PEOPLE_API_URL, "people")
timeoff_types = fetch_all(TIMEOFF_TYPES_API_URL, "time off types")
# === Step 7: Build Lookup Maps ===
people_map = {sanitize(p["name"]).lower(): p for p in people}
timeoff_types_map = {
    sanitize(t["timeoff_type_name"]).lower(): t for t in timeoff_types
}
print("\n🚀 Starting Time Off import...\n")
# === Step 8: Main Import Loop ===
for index, row in df.iterrows():
    row_num = index + 1
    try:
        # === PERSON ===
        person_name = sanitize(row.get("person_name"))
        if not person_name:
            raise Exception("Missing person_name")
        pkey = person_name.lower()
        if pkey not in people_map:
            raise Exception(f"Unknown person '{person_name}'")
        person_id = people_map[pkey]["people_id"]

        # === TIME OFF TYPE ===
        tot_name = sanitize(row.get("timeoff_type_name"))
        if not tot_name:
            raise Exception("Missing timeoff_type_name")
        tkey = tot_name.lower()
        if tkey in timeoff_types_map:
            timeoff_type_id = timeoff_types_map[tkey]["timeoff_type_id"]
        else:
            new_type_payload = {
                "timeoff_type_name": tot_name,
                "color": generate_color(tot_name),
                "active": 1,
            }
            if DRY_RUN:
                print(f"[Dry-run] Creating timeoff type: {new_type_payload}")
                timeoff_type_id = 900000 + row_num
            else:
                resp = requests.post(
                    TIMEOFF_TYPES_API_URL,
                    json=new_type_payload,
                    headers=HEADERS,
                )
                if resp.ok:
                    t = resp.json()
                    timeoff_type_id = t["timeoff_type_id"]
                    timeoff_types_map[tkey] = t
                else:
                    raise Exception(
                        f"POST timeoff-types failed: "
                        f"{resp.status_code} - {resp.text}"
                    )

        # === FULL DAY + HOURS LOGIC ===
        full_day_raw = row.get("full_day")
        full_day = int(full_day_raw) if not pd.isna(full_day_raw) else None
        is_full_day = full_day == 1
        hours_raw = row.get("hours")
        hours_num = float(hours_raw) if not pd.isna(hours_raw) else None

        # If full-day time off: allow missing hours or hours below 0.01
        if is_full_day:
            if hours_num is not None and hours_num < 0.01:
                hours_num = None

        # === TIMEOFF PAYLOAD ===
        repeat_state = row.get("repeat_state")
        repeat_end = sanitize(row.get("repeat_end_date"))
        status_val = row.get("status")
        notes = sanitize(row.get("notes"))  # matches the "notes" CSV column
        timeoff_payload = {
            "timeoff_type_id": timeoff_type_id,
            "people_ids": [person_id],
            "start_date": sanitize(row.get("start_date")),
            "end_date": sanitize(row.get("end_date")),
            "timeoff_notes": notes or None,
            "repeat_state": int(repeat_state)
            if not pd.isna(repeat_state)
            else None,
            "repeat_end": repeat_end or None,
            "status": int(status_val) if not pd.isna(status_val) else None,
            "full_day": full_day if full_day is not None else None,
        }

        # Add hours only when appropriate
        if hours_num is not None:
            timeoff_payload["hours"] = hours_num

        # Remove None values
        timeoff_payload = {k: v for k, v in timeoff_payload.items() if v is not None}

        if DRY_RUN:
            print(f"[Dry-run] Creating timeoff: {timeoff_payload}")
        else:
            resp = requests.post(
                TIMEOFF_API_URL,
                json=timeoff_payload,
                headers=HEADERS,
            )
            if resp.ok:
                if SHOW_SUCCESS_OUTPUT:
                    print(f"✅ Time off created for Row {row_num}")
            else:
                raise Exception(
                    f"POST timeoffs failed: {resp.status_code} - {resp.text}"
                )
            time.sleep(0.75)
    except Exception as e:
        msg = str(e)
        print(f"❌ Error in Row {row_num}: {msg}")
        log_error(row_num, msg)
# === Completion ===
print("\n🎉 Time Off Import completed.")
print(f"📄 Errors (if any) logged to: {ERROR_LOG_FILE}")
with open(ERROR_LOG_FILE, "a") as f:
    f.write(
        "\nImport completed: "
        f"{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} (local), "
        f"{datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')} UTC\n"
    )
Step 6: Run the script
Open Terminal or Command Prompt.
Navigate to the folder where your files are stored:
cd path/to/your/folder
Run a dry-run test first (make sure DRY_RUN = True at the top of the script; no data is created):
python3 import_timeoff_to_float.py
When everything looks good, open the script and set:
DRY_RUN = False
Then run again:
python3 import_timeoff_to_float.py
Note: Save import_timeoff_to_float.py in the same directory as your import files, or run it using its full path.
Step 7: Review logs
upload-errors-timeoff.txt: rows that failed, with the API error messages and payloads
Successful rows are printed to the terminal when SHOW_SUCCESS_OUTPUT = True
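If you just want a quick count of failures, the error log the script writes (upload-errors-timeoff.txt) can be summarized in a few lines, since every failed row starts with "Row <n>:". The sample log text below is illustrative:

```python
def count_errors(log_text: str) -> int:
    # Each failed row in the error log starts with "Row <n>:".
    return sum(1 for line in log_text.splitlines() if line.startswith("Row "))

# Sample log content, in the shape the script writes.
sample_log = """=== Float Time Off Import Errors ===
Import started: 2025-11-17 09:00:00 (local), 2025-11-17 14:00:00 UTC

Row 3: Unknown person 'Jane Doe'

Row 7: Missing timeoff_type_name
"""
print("Failed rows:", count_errors(sample_log))  # Failed rows: 2

# With a real log file:
#   with open("upload-errors-timeoff.txt") as f:
#       print("Failed rows:", count_errors(f.read()))
```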
Additional Notes:
Full-day vs partial-day: Use "full_day": 1 for a full day. If you include "hours", it takes precedence for partial-day time off.
Overlaps with work: Adding a full-day time off on a date that already has a scheduled allocation will delete that allocation for the day.
API limits: The script pauses between requests (time.sleep(0.75)) to avoid hitting rate limits.
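If you do hit rate limits anyway (HTTP 429), a retry-with-exponential-backoff wrapper is one way to make the import more resilient. This is a sketch, not part of the script above; it takes a zero-argument callable (e.g. a lambda wrapping requests.post) so it works with any request:

```python
import time

def post_with_retry(post_fn, max_retries=3, sleep=time.sleep):
    """Call post_fn(), retrying with exponential backoff (1s, 2s, 4s, ...)
    while it keeps returning HTTP 429 (rate limited)."""
    resp = None
    for attempt in range(max_retries):
        resp = post_fn()
        if resp.status_code != 429:
            return resp  # success or a non-rate-limit error: hand it back
        sleep(2 ** attempt)  # back off before the next attempt
    return resp  # still rate limited after max_retries attempts

# In the script, the direct call could then become:
#   resp = post_with_retry(lambda: requests.post(TIMEOFF_API_URL, json=timeoff_payload, headers=HEADERS))
```

The sleep parameter exists only so the backoff behavior can be tested without actually waiting.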