Merge 2446f44c91 into 569f25098a
commit
5ce41c7247
@ -0,0 +1,358 @@
# 🇻🇳 INSTALLATION AND VPS DEPLOYMENT GUIDE

## Table of Contents

1. [System requirements](#1-system-requirements)
2. [Installing on a VPS](#2-installing-on-a-vps)
3. [Required configuration](#3-required-configuration-configtoml)
4. [Run modes](#4-run-modes)
5. [Running in the background with systemd](#5-running-in-the-background-with-systemd)
6. [Running with Docker](#6-running-with-docker-optional)
7. [Checking status and troubleshooting](#7-checking-status-and-troubleshooting)
8. [Configuration summary](#8-configuration-summary)

---

## 1. System requirements

| Component | Minimum requirement |
|---|---|
| **OS** | Ubuntu 20.04+ / Debian 11+ |
| **RAM** | 2 GB (4 GB recommended) |
| **Disk** | 10 GB free |
| **Python** | 3.10, 3.11, or 3.12 |
| **FFmpeg** | Required (installed automatically if missing) |

---

## 2. Installing on a VPS

### Step 1: Update the system and install dependencies

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y python3 python3-pip python3-venv ffmpeg git
```

### Step 2: Clone the project

```bash
cd /opt
git clone https://github.com/thaitien280401-stack/RedditVideoMakerBot.git
cd RedditVideoMakerBot
```

### Step 3: Create a virtual environment

```bash
python3 -m venv venv
source venv/bin/activate
```

### Step 4: Install the dependencies

```bash
pip install -r requirements.txt
```

### Step 5: Install the Playwright browser (needed for screenshot mode)

```bash
python -m playwright install
python -m playwright install-deps
```

---

## 3. Required configuration (`config.toml`)

On first run the program creates `config.toml` automatically and prompts you for the required values.
You can also create `config.toml` in the project root ahead of time:

```toml
# ===== REQUIRED SETTINGS =====

[threads.creds]
access_token = "YOUR_THREADS_ACCESS_TOKEN" # From the Meta Developer Portal
user_id = "YOUR_THREADS_USER_ID"           # Threads User ID

[threads.thread]
target_user_id = "" # Empty = use your own account
post_id = ""        # Empty = automatically pick the newest thread
keywords = "viral, trending, hài hước" # Filter keywords (optional)
max_comment_length = 500
min_comment_length = 1
post_lang = "vi"
min_comments = 5
blocked_words = "spam, quảng cáo"
channel_name = "Threads Vietnam"

[settings]
allow_nsfw = false
theme = "dark"
times_to_run = 1
opacity = 0.9
resolution_w = 1080
resolution_h = 1920

[settings.background]
background_video = "minecraft"
background_audio = "lofi"
background_audio_volume = 0.15

[settings.tts]
voice_choice = "googletranslate" # Best option for Vietnamese
silence_duration = 0.3
no_emojis = false

# ===== SCHEDULER (automatic scheduling) =====

[scheduler]
enabled = true                # ENABLE automatic scheduling
cron = "0 */3 * * *"          # One video every 3 hours
timezone = "Asia/Ho_Chi_Minh" # Vietnam time zone
max_videos_per_day = 8        # At most 8 videos/day

# ===== AUTO UPLOAD (optional) =====

[uploaders.youtube]
enabled = false
client_id = ""
client_secret = ""
refresh_token = ""

[uploaders.tiktok]
enabled = false
client_key = ""
client_secret = ""
refresh_token = ""

[uploaders.facebook]
enabled = false
page_id = ""
access_token = ""
```

### How to obtain Threads API credentials

1. Go to the [Meta Developer Portal](https://developers.facebook.com/)
2. Create a new App → choose the "Business" type
3. Add the "Threads API" product
4. Go to Settings → Basic → copy the **App ID**
5. Create an Access Token for the Threads API
6. Get your **User ID** from the Threads API endpoint: `GET /me?fields=id,username`
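Step 6 can be exercised with a few lines of Python. This is a minimal sketch only: the base URL below assumes the standard Threads Graph API host and version, which may differ for your app, and it only shows how the request URL is formed.

```python
# Sketch of the GET /me?fields=id,username request from step 6.
# Assumption: the Threads Graph API base URL below; adjust the
# version segment if your app uses a different one.
from urllib.parse import urlencode

BASE_URL = "https://graph.threads.net/v1.0"


def build_me_request(access_token: str) -> str:
    """Return the full URL for GET /me?fields=id,username."""
    query = urlencode({"fields": "id,username", "access_token": access_token})
    return f"{BASE_URL}/me?{query}"


# The returned URL can be fetched with requests.get() or curl.
print(build_me_request("YOUR_THREADS_ACCESS_TOKEN"))
```

On success the endpoint returns a JSON body containing your `id`, which is the value to paste into `threads.creds.user_id`.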
---

## 4. Run modes

### 4.1. Manual — Default
Create one video, no upload:
```bash
python main.py
```

### 4.2. Auto (create + upload)
Create a video and automatically upload it to the configured platforms:
```bash
python main.py --mode auto
```

### 4.3. ⭐ Scheduled — RECOMMENDED FOR A VPS
Run continuously on the VPS and create videos on a schedule:
```bash
python main.py --mode scheduled
```

**Defaults:**
- Cron: `0 */3 * * *` → one video **every 3 hours**
- Run times: 00:00, 03:00, 06:00, 09:00, 12:00, 15:00, 18:00, 21:00 (Vietnam time)
- **= 8 videos/day**
- Timezone: `Asia/Ho_Chi_Minh`
- Topics that already have a video are skipped automatically (title deduplication)
- At most `max_videos_per_day` videos per day

### Customizing the schedule

Change `cron` in `config.toml`:

| Cron expression | Description | Videos/day |
|---|---|---|
| `0 */3 * * *` | Every 3 hours (default) | 8 |
| `0 */4 * * *` | Every 4 hours | 6 |
| `0 */6 * * *` | Every 6 hours | 4 |
| `0 8,14,20 * * *` | At 08:00, 14:00, 20:00 | 3 |
| `0 */2 * * *` | Every 2 hours | 12 |
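The videos-per-day column follows directly from the hour field of each expression. A quick sanity check, counting how many hours of 0–23 each field matches (a simplified reading that handles only the `*/n` and `a,b,c` forms used in the table, not full cron syntax):

```python
# Count daily firings for cron hour fields of the form "*/n" or "a,b,c".
# Simplified illustration only -- real cron syntax has more forms.
def runs_per_day(hour_field: str) -> int:
    if hour_field.startswith("*/"):
        step = int(hour_field[2:])
        return len(range(0, 24, step))  # hours 0, step, 2*step, ...
    return len(hour_field.split(","))   # explicit list of hours


print(runs_per_day("*/3"))      # 8 videos/day
print(runs_per_day("8,14,20"))  # 3 videos/day
```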
---

## 5. Running in the background with systemd

### Step 1: Create a systemd service

```bash
sudo nano /etc/systemd/system/threads-video-bot.service
```

Paste the following:

```ini
[Unit]
Description=Threads Video Maker Bot
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/RedditVideoMakerBot
ExecStart=/opt/RedditVideoMakerBot/venv/bin/python main.py --mode scheduled
Restart=always
RestartSec=30
Environment=PYTHONUNBUFFERED=1

[Install]
WantedBy=multi-user.target
```

### Step 2: Enable and start

```bash
sudo systemctl daemon-reload
sudo systemctl enable threads-video-bot
sudo systemctl start threads-video-bot
```

### Step 3: Check the status

```bash
# Show the service status
sudo systemctl status threads-video-bot

# Follow the logs in real time
sudo journalctl -u threads-video-bot -f

# Show recent logs
sudo journalctl -u threads-video-bot --since "1 hour ago"

# Restart
sudo systemctl restart threads-video-bot

# Stop
sudo systemctl stop threads-video-bot
```

---

## 6. Running with Docker (optional)

### Build the image

```bash
cd /opt/RedditVideoMakerBot
docker build -t threads-video-bot .
```

### Run the container

```bash
docker run -d \
  --name threads-bot \
  --restart unless-stopped \
  -v $(pwd)/config.toml:/app/config.toml \
  -v $(pwd)/results:/app/results \
  -v $(pwd)/video_creation/data:/app/video_creation/data \
  threads-video-bot python3 main.py --mode scheduled
```

### View the logs

```bash
docker logs -f threads-bot
```

---
## 7. Checking status and troubleshooting

### Checking status

```bash
# Is the service running?
sudo systemctl is-active threads-video-bot

# Show recent errors
sudo journalctl -u threads-video-bot --since "30 min ago" --no-pager

# Count the videos created so far
ls -la results/*/
```

### Common errors

| Error | Cause | Fix |
|---|---|---|
| `ModuleNotFoundError` | Missing dependencies | `source venv/bin/activate && pip install -r requirements.txt` |
| `FileNotFoundError: ffmpeg` | FFmpeg not installed | `sudo apt install ffmpeg` |
| `Threads API error 401` | Expired token | Create a new access token in the Meta Developer Portal |
| `No suitable thread found` | No new threads left | Wait for new threads or change `target_user_id` |
| `playwright._impl._errors` | Browser not installed | `python -m playwright install && python -m playwright install-deps` |
| `Đã đạt giới hạn X video/ngày` | Daily video limit reached | Normal; the counter resets the next day |

### Title history (deduplication)

- Stored in: `video_creation/data/title_history.json`
- List the titles created so far: `cat video_creation/data/title_history.json | python -m json.tool`
- Reset (allow everything to be re-created): `echo "[]" > video_creation/data/title_history.json`
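Because the history file is a plain JSON list, a duplicate check is easy to script. A minimal sketch, assuming each entry is an object with a `title` field (the actual schema is whatever `utils/title_history.py` writes):

```python
import json
from pathlib import Path


def is_duplicate(history_path: str, title: str) -> bool:
    """Return True if `title` already appears in the history file."""
    path = Path(history_path)
    if not path.exists():
        return False  # no history yet, nothing is a duplicate
    entries = json.loads(path.read_text(encoding="utf-8"))
    # Assumption: entries are dicts with a "title" key, as save_title() writes.
    return any(e.get("title") == title for e in entries)


# Example with a scratch file in the current directory:
Path("title_history.json").write_text('[{"title": "Hello"}]', encoding="utf-8")
print(is_duplicate("title_history.json", "Hello"))  # True
```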
---
## 8. Configuration summary

### ⚠️ MUST be changed

| Item | Key in config.toml | Description | Where to get it |
|---|---|---|---|
| **Threads Token** | `threads.creds.access_token` | API access token | [Meta Developer Portal](https://developers.facebook.com/) |
| **Threads User ID** | `threads.creds.user_id` | Threads user ID | API endpoint `/me?fields=id` |

### 📋 Worth customizing

| Item | Key | Default | Suggestion |
|---|---|---|---|
| Channel name | `threads.thread.channel_name` | "Threads Vietnam" | Your channel name |
| Keywords | `threads.thread.keywords` | "" | "viral, trending, hài hước" |
| Blocked words | `threads.thread.blocked_words` | "" | "spam, quảng cáo, 18+" |
| Schedule | `scheduler.cron` | `0 */3 * * *` | See the table in section 4 |
| Max videos/day | `scheduler.max_videos_per_day` | 8 | As needed |

### 🔧 Optional: automatic upload

| Platform | Keys to configure |
|---|---|
| **YouTube** | `uploaders.youtube.client_id`, `client_secret`, `refresh_token` |
| **TikTok** | `uploaders.tiktok.client_key`, `client_secret`, `refresh_token` |
| **Facebook** | `uploaders.facebook.page_id`, `access_token` |

---
## Quick summary

```bash
# 1. Install
cd /opt/RedditVideoMakerBot
python3 -m venv venv && source venv/bin/activate
pip install -r requirements.txt
python -m playwright install && python -m playwright install-deps

# 2. Configure
nano config.toml  # Enter your Threads API credentials

# 3. Test-create one video
python main.py

# 4. Run automatically on the VPS (every 3 h = 8 videos/day)
python main.py --mode scheduled

# 5. Or run in the background with systemd (recommended)
sudo systemctl enable --now threads-video-bot
```
@ -0,0 +1,119 @@
# 🇻🇳 Conversion Plan: Reddit Video Maker → Threads Vietnam Video Maker

## Overview
Convert the Reddit Video Maker Bot into a tool that automatically creates videos from Threads (Meta) content for the Vietnamese market, with automatic posting to TikTok, YouTube, and Facebook.

---

## Phase 1: Replace Reddit with the Threads API
### 1.1 `threads/` module - fetch content from Threads
- Create `threads/__init__.py`
- Create `threads/threads_client.py` - client for the Threads API (Meta Graph API)
  - OAuth2 login with the Threads API
  - Fetch the list of trending/hot posts
  - Fetch a post's replies (comments)
  - Filter content by keyword, length, and language
- The returned data structure mirrors reddit_object:

```python
{
    "thread_url": "https://threads.net/@user/post/...",
    "thread_title": "Nội dung bài viết",
    "thread_id": "abc123",
    "thread_author": "@username",
    "is_nsfw": False,
    "thread_post": "Nội dung đầy đủ",
    "comments": [
        {
            "comment_body": "Nội dung reply",
            "comment_url": "permalink",
            "comment_id": "xyz789",
            "comment_author": "@user2"
        }
    ]
}
```
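A tiny guard over this contract keeps the downstream pipeline honest. A sketch only; the real client may validate differently, and the key set below is copied from the structure above:

```python
# Minimal validation of the thread-object contract shown above.
REQUIRED_KEYS = {
    "thread_url", "thread_title", "thread_id",
    "thread_author", "is_nsfw", "thread_post", "comments",
}


def validate_thread_object(obj: dict) -> bool:
    """Return True if obj has every field the pipeline expects."""
    return REQUIRED_KEYS.issubset(obj) and isinstance(obj["comments"], list)


print(validate_thread_object({"thread_url": "x"}))  # False
```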

### 1.2 Update screenshots for Threads
- Create `video_creation/threads_screenshot.py`
- Render an HTML template in the Threads style
- Support dark/light mode
- Show avatar, username, and verified badge
- Use a font with full Vietnamese Unicode support

---

## Phase 2: Optimizing for the Vietnamese Market
### 2.1 Vietnamese TTS
- Use Google Translate TTS (gTTS) with language `vi`
- Natural-sounding Vietnamese voices
- Default configuration `post_lang = "vi"`

### 2.2 Vietnamese Text Processing
- Update `utils/voice.py` for Vietnamese
- Handle diacritics and Vietnamese Unicode characters
- Optimize sentence splitting for Vietnamese TTS
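The sentence splitting in 2.2 can be prototyped with the standard library alone. A minimal sketch, not the actual `utils/voice.py` implementation: it splits on sentence-ending punctuation and leaves Vietnamese diacritics untouched, since Python 3 strings are Unicode throughout.

```python
import re


def split_sentences(text: str) -> list[str]:
    """Split text into sentences for TTS, keeping diacritics intact."""
    # Split after ., !, or ? followed by whitespace; drop empty pieces.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]


sentences = split_sentences("Xin chào! Hôm nay trời đẹp. Bạn khỏe không?")
print(sentences)  # ['Xin chào!', 'Hôm nay trời đẹp.', 'Bạn khỏe không?']
```

Each returned sentence can then be fed to the TTS engine separately, which gives natural pauses at sentence boundaries.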
---

## Phase 3: Auto-Posting
### 3.1 `uploaders/` module
- `uploaders/__init__.py`
- `uploaders/base_uploader.py` - base class
- `uploaders/tiktok_uploader.py` - post to TikTok
  - Uses the TikTok Content Posting API
  - Supports captions and hashtags
  - Scheduled posting
- `uploaders/youtube_uploader.py` - post to YouTube
  - Uses the YouTube Data API v3
  - Uploads the video with title, description, and tags
  - Sets privacy (public/private/unlisted)
  - Scheduled publishing
- `uploaders/facebook_uploader.py` - post to Facebook
  - Uses the Facebook Graph API
  - Uploads the video to a Page or Profile
  - Caption and scheduling support

### 3.2 Upload Manager
- `uploaders/upload_manager.py`
- Manages concurrent uploads to multiple platforms
- Retry logic on upload failure
- Logging and status tracking
---

## Phase 4: Automatic Scheduling System
### 4.1 `scheduler/` module
- `scheduler/__init__.py`
- `scheduler/scheduler.py` - schedules video creation and posting
  - Uses APScheduler
  - Cron-style scheduling
  - Supports the Vietnam time zone (Asia/Ho_Chi_Minh)
- `scheduler/pipeline.py` - automated pipeline
  - Fetch content → TTS → Screenshots → Video → Upload
  - Error handling and retry
  - Notification on completion

---

## Phase 5: Update Configuration & Entry Point
### 5.1 New config
- Update `utils/.config.template.toml` with new sections:
  - `[threads.creds]` - Threads API credentials
  - `[uploaders.tiktok]` - TikTok config
  - `[uploaders.youtube]` - YouTube config
  - `[uploaders.facebook]` - Facebook config
  - `[scheduler]` - scheduling config

### 5.2 Entry Point
- Update `main.py` for the new workflow
- Support 3 modes: manual, auto, scheduled

---

## New Dependencies
```
google-api-python-client # YouTube Data API
google-auth-oauthlib     # Google OAuth
apscheduler              # Task scheduling
httpx                    # Async HTTP client (Threads API)
```
@ -0,0 +1,6 @@
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
python_classes = ["Test*"]
python_functions = ["test_*"]
addopts = "-v --tb=short"
@ -0,0 +1,241 @@
"""
Scheduler - automatic scheduling system for creating and posting videos.

Uses APScheduler to run the automated jobs.
"""

import math
import os
from datetime import datetime
from typing import Optional

from utils import settings
from utils.console import print_step, print_substep
from utils.id import extract_id
from utils.title_history import save_title


def run_pipeline(post_id: Optional[str] = None) -> Optional[str]:
    """Run the full Threads-to-video pipeline.

    Pipeline:
    1. Fetch content from Threads
    2. Generate TTS audio
    3. Generate screenshots
    4. Download background video/audio
    5. Assemble the final video
    6. Upload to the configured platforms (if any)

    Args:
        post_id: Specific thread ID (optional).

    Returns:
        Path of the created video file, or None on failure.
    """
    from threads.threads_client import get_threads_posts
    from video_creation.background import (
        chop_background,
        download_background_audio,
        download_background_video,
        get_background_config,
    )
    from video_creation.final_video import make_final_video
    from video_creation.threads_screenshot import get_screenshots_of_threads_posts
    from video_creation.voices import save_text_to_mp3

    print_step("🚀 Bắt đầu pipeline tạo video...")

    # Preflight: validate the access token before calling the API
    from utils.check_token import preflight_check

    preflight_check()

    try:
        # Step 1: fetch content from Threads
        print_step("📱 Bước 1: Lấy nội dung từ Threads...")
        thread_object = get_threads_posts(post_id)
        thread_id = extract_id(thread_object)
        print_substep(f"Thread ID: {thread_id}", style="bold blue")

        # Step 2: generate TTS audio
        print_step("🎙️ Bước 2: Tạo audio TTS...")
        length, number_of_comments = save_text_to_mp3(thread_object)
        length = math.ceil(length)

        # Step 3: generate screenshots
        print_step("📸 Bước 3: Tạo hình ảnh...")
        get_screenshots_of_threads_posts(thread_object, number_of_comments)

        # Step 4: background video/audio
        print_step("🎬 Bước 4: Xử lý background...")
        bg_config = {
            "video": get_background_config("video"),
            "audio": get_background_config("audio"),
        }
        download_background_video(bg_config["video"])
        download_background_audio(bg_config["audio"])
        chop_background(bg_config, length, thread_object)

        # Step 5: assemble the final video
        print_step("🎥 Bước 5: Tạo video cuối cùng...")
        make_final_video(number_of_comments, length, thread_object, bg_config)

        # Locate the video file that was just created
        subreddit = (
            settings.config.get("threads", {}).get("thread", {}).get("channel_name", "threads")
        )
        results_dir = f"./results/{subreddit}"
        video_path = None
        if os.path.exists(results_dir):
            files = sorted(
                [f for f in os.listdir(results_dir) if f.endswith(".mp4")],
                key=lambda x: os.path.getmtime(os.path.join(results_dir, x)),
                reverse=True,
            )
            if files:
                video_path = os.path.join(results_dir, files[0])

        # Step 6: upload (if configured)
        upload_config = settings.config.get("uploaders", {})
        has_uploaders = any(
            upload_config.get(p, {}).get("enabled", False) for p in ["youtube", "tiktok", "facebook"]
        )

        if has_uploaders and video_path:
            print_step("📤 Bước 6: Upload video lên các platform...")
            from uploaders.upload_manager import UploadManager

            manager = UploadManager()
            title = thread_object.get("thread_title", "Threads Video")[:100]
            description = thread_object.get("thread_post", "")[:500]

            # Use the thumbnail if one exists
            thumbnail_path = None
            thumb_candidate = f"./assets/temp/{thread_id}/thumbnail.png"
            if os.path.exists(thumb_candidate):
                thumbnail_path = thumb_candidate

            results = manager.upload_to_all(
                video_path=video_path,
                title=title,
                description=description,
                thumbnail_path=thumbnail_path,
            )

            print_step("📊 Kết quả upload:")
            for platform, url in results.items():
                if url:
                    print_substep(f"  ✅ {platform}: {url}", style="bold green")
                else:
                    print_substep(f"  ❌ {platform}: Thất bại", style="bold red")

        print_step("✅ Pipeline hoàn tất!")

        # Record the title so duplicate videos are not created later
        title = thread_object.get("thread_title", "")
        tid = thread_object.get("thread_id", "")
        if title:
            save_title(title=title, thread_id=tid, source="threads")

        return video_path

    except Exception as e:
        print_substep(f"❌ Lỗi pipeline: {e}", style="bold red")
        raise


def run_scheduled():
    """Run the pipeline on the configured schedule.

    Uses APScheduler.
    """
    try:
        from apscheduler.schedulers.blocking import BlockingScheduler
        from apscheduler.triggers.cron import CronTrigger
    except ImportError:
        print_substep(
            "Cần cài đặt APScheduler: pip install apscheduler",
            style="bold red",
        )
        return

    scheduler_config = settings.config.get("scheduler", {})
    enabled = scheduler_config.get("enabled", False)

    if not enabled:
        print_substep("Scheduler chưa được kích hoạt trong config!", style="bold yellow")
        return

    timezone = scheduler_config.get("timezone", "Asia/Ho_Chi_Minh")
    cron_expression = scheduler_config.get(
        "cron", "0 */3 * * *"
    )  # Default: every 3 hours (8 runs/day: 00, 03, 06, 09, 12, 15, 18, 21h)
    max_videos_per_day = scheduler_config.get("max_videos_per_day", 8)

    # Parse the cron expression
    cron_parts = cron_expression.split()
    if len(cron_parts) != 5:
        print_substep(
            "Cron expression không hợp lệ! Format: minute hour day month weekday", style="bold red"
        )
        return

    scheduler = BlockingScheduler(timezone=timezone)

    videos_today = {"count": 0, "date": datetime.now().strftime("%Y-%m-%d")}

    def scheduled_job():
        """Job executed on the schedule."""
        current_date = datetime.now().strftime("%Y-%m-%d")

        # Reset the counter when a new day starts
        if current_date != videos_today["date"]:
            videos_today["count"] = 0
            videos_today["date"] = current_date

        if videos_today["count"] >= max_videos_per_day:
            print_substep(
                f"Đã đạt giới hạn {max_videos_per_day} video/ngày. Bỏ qua.",
                style="bold yellow",
            )
            return

        print_step(f"⏰ Scheduler: Bắt đầu tạo video lúc {datetime.now().strftime('%H:%M:%S')}...")
        try:
            result = run_pipeline()
            if result:
                videos_today["count"] += 1
                print_substep(
                    f"Video #{videos_today['count']}/{max_videos_per_day} ngày hôm nay",
                    style="bold blue",
                )
        except Exception as e:
            print_substep(f"Scheduler job thất bại: {e}", style="bold red")

    trigger = CronTrigger(
        minute=cron_parts[0],
        hour=cron_parts[1],
        day=cron_parts[2],
        month=cron_parts[3],
        day_of_week=cron_parts[4],
        timezone=timezone,
    )

    scheduler.add_job(scheduled_job, trigger, id="video_pipeline", replace_existing=True)

    print_step("📅 Scheduler đã khởi động!")
    print_substep(f"  Cron: {cron_expression}", style="bold blue")
    print_substep(f"  Timezone: {timezone}", style="bold blue")
    print_substep(f"  Max videos/ngày: {max_videos_per_day}", style="bold blue")
    print_substep("  Nhấn Ctrl+C để dừng", style="bold yellow")

    try:
        scheduler.start()
    except (KeyboardInterrupt, SystemExit):
        scheduler.shutdown()
        print_step("Scheduler đã dừng.")
@ -0,0 +1,208 @@
"""
Shared fixtures for the test suite.

Provides mock configurations, temporary directories, and common test data
used across all test modules.
"""

import os
import sys

import pytest


# Ensure project root is importable
PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)


# ---------------------------------------------------------------------------
# Mock configuration dictionary matching the project's config.toml structure
# ---------------------------------------------------------------------------

MOCK_CONFIG = {
    "threads": {
        "creds": {
            "access_token": "FAKE_ACCESS_TOKEN_FOR_TESTING",
            "user_id": "123456789",
        },
        "thread": {
            "source": "user",
            "target_user_id": "",
            "post_id": "",
            "keywords": "",
            "max_comment_length": 500,
            "min_comment_length": 1,
            "post_lang": "vi",
            "min_comments": 0,
            "blocked_words": "",
            "channel_name": "test_channel",
            "use_conversation": True,
            "use_insights": True,
            "search_query": "",
            "search_type": "TOP",
            "search_mode": "KEYWORD",
            "search_media_type": "",
        },
        "publishing": {
            "enabled": False,
            "reply_control": "everyone",
            "check_quota": True,
        },
    },
    "reddit": {
        "creds": {
            "client_id": "",
            "client_secret": "",
            "username": "",
            "password": "",
            "2fa": False,
        },
        "thread": {
            "subreddit": "AskReddit",
            "post_id": "",
            "post_lang": "en",
        },
    },
    "settings": {
        "allow_nsfw": False,
        "theme": "dark",
        "times_to_run": 1,
        "opacity": 0.9,
        "storymode": False,
        "storymode_method": 0,
        "resolution_w": 1080,
        "resolution_h": 1920,
        "zoom": 1.0,
        "channel_name": "test",
        "background": {
            "background_video": "minecraft-parkour-1",
            "background_audio": "lofi-1",
            "background_audio_volume": 0.15,
            "enable_extra_audio": False,
            "background_thumbnail": True,
            "background_thumbnail_font_family": "arial",
            "background_thumbnail_font_size": 36,
            "background_thumbnail_font_color": "255,255,255",
        },
        "tts": {
            "voice_choice": "GoogleTranslate",
            "random_voice": False,
            "no_emojis": True,
            "elevenlabs_voice_name": "Rachel",
            "elevenlabs_api_key": "",
            "aws_polly_voice": "Joanna",
            "tiktok_voice": "en_us_001",
            "tiktok_sessionid": "",
            "python_voice": "0",
            "openai_api_key": "",
            "openai_voice_name": "alloy",
            "openai_model": "tts-1",
        },
    },
    "uploaders": {
        "youtube": {
            "enabled": False,
            "client_id": "test_client_id",
            "client_secret": "test_client_secret",
            "refresh_token": "test_refresh_token",
        },
        "tiktok": {
            "enabled": False,
            "client_key": "test_client_key",
            "client_secret": "test_client_secret",
            "refresh_token": "test_refresh_token",
        },
        "facebook": {
            "enabled": False,
            "access_token": "test_access_token",
            "page_id": "test_page_id",
        },
    },
    "scheduler": {
        "enabled": False,
        "cron": "0 */3 * * *",
        "timezone": "Asia/Ho_Chi_Minh",
        "max_videos_per_day": 8,
    },
}


@pytest.fixture
def mock_config(monkeypatch):
    """Inject a mock configuration into ``utils.settings.config``."""
    import copy

    import utils.settings as _settings

    cfg = copy.deepcopy(MOCK_CONFIG)
    monkeypatch.setattr(_settings, "config", cfg)
    return cfg


@pytest.fixture
def tmp_dir(tmp_path):
    """Provide a temporary directory for test file I/O."""
    return tmp_path


@pytest.fixture
def sample_thread_object():
    """Return a representative Threads content object used throughout the pipeline."""
    return {
        "thread_url": "https://www.threads.net/@user/post/ABC123",
        "thread_title": "Test Thread Title for Video",
        "thread_id": "test_thread_123",
        "thread_author": "@test_user",
        "is_nsfw": False,
        "thread_post": "This is the main thread post content for testing.",
        "comments": [
            {
                "comment_body": "First test comment reply.",
                "comment_url": "https://www.threads.net/@user/post/ABC123/reply1",
                "comment_id": "reply_001",
                "comment_author": "@commenter_1",
            },
            {
                "comment_body": "Second test comment reply with more text.",
                "comment_url": "https://www.threads.net/@user/post/ABC123/reply2",
                "comment_id": "reply_002",
                "comment_author": "@commenter_2",
            },
        ],
    }


@pytest.fixture
def sample_video_file(tmp_path):
    """Create a minimal fake video file for upload tests."""
    video = tmp_path / "test_video.mp4"
    video.write_bytes(b"\x00" * 1024)  # 1KB dummy file
    return str(video)


@pytest.fixture
def sample_thumbnail_file(tmp_path):
    """Create a minimal fake thumbnail file."""
    thumb = tmp_path / "thumbnail.png"
    thumb.write_bytes(b"\x89PNG\r\n\x1a\n" + b"\x00" * 100)
    return str(thumb)


@pytest.fixture
def title_history_file(tmp_path):
    """Create a temporary title history JSON file."""
    history_file = tmp_path / "title_history.json"
    history_file.write_text("[]", encoding="utf-8")
    return str(history_file)


@pytest.fixture
def videos_json_file(tmp_path):
    """Create a temporary videos.json file."""
    videos_file = tmp_path / "videos.json"
    videos_file.write_text("[]", encoding="utf-8")
    return str(videos_file)
@ -0,0 +1,161 @@
"""
Unit tests for utils/check_token.py — Preflight access token validation.

All external API calls are mocked.
"""

from unittest.mock import MagicMock, patch

import pytest
import requests

from utils.check_token import TokenCheckError, _call_me_endpoint, _try_refresh


# ===================================================================
# _call_me_endpoint
# ===================================================================


class TestCallMeEndpoint:
    def test_successful_call(self):
        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.json.return_value = {
            "id": "123456",
            "username": "testuser",
            "name": "Test User",
        }
        mock_resp.raise_for_status = MagicMock()

        with patch("utils.check_token.requests.get", return_value=mock_resp):
            result = _call_me_endpoint("valid_token")
            assert result["username"] == "testuser"

    def test_401_raises_error(self):
        mock_resp = MagicMock()
        mock_resp.status_code = 401
        with patch("utils.check_token.requests.get", return_value=mock_resp):
            with pytest.raises(TokenCheckError, match="401"):
                _call_me_endpoint("bad_token")

    def test_403_raises_error(self):
        mock_resp = MagicMock()
        mock_resp.status_code = 403
        with patch("utils.check_token.requests.get", return_value=mock_resp):
            with pytest.raises(TokenCheckError, match="403"):
                _call_me_endpoint("bad_token")

    def test_200_with_error_body(self):
        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.json.return_value = {
            "error": {"message": "Token expired", "code": 190}
        }
        mock_resp.raise_for_status = MagicMock()
        with patch("utils.check_token.requests.get", return_value=mock_resp):
            with pytest.raises(TokenCheckError, match="Token expired"):
                _call_me_endpoint("expired_token")


# ===================================================================
# _try_refresh
# ===================================================================


class TestTryRefresh:
    def test_successful_refresh(self):
        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.json.return_value = {"access_token": "new_token_456"}
        mock_resp.raise_for_status = MagicMock()
        with patch("utils.check_token.requests.get", return_value=mock_resp):
            result = _try_refresh("old_token")
            assert result == "new_token_456"

    def test_returns_none_on_error_body(self):
        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.json.return_value = {"error": {"message": "Cannot refresh"}}
        mock_resp.raise_for_status = MagicMock()
        with patch("utils.check_token.requests.get", return_value=mock_resp):
            result = _try_refresh("old_token")
            assert result is None

    def test_returns_none_on_request_exception(self):
        with patch(
            "utils.check_token.requests.get",
            side_effect=requests.RequestException("Network error"),
        ):
            result = _try_refresh("old_token")
            assert result is None

    def test_returns_none_when_no_token_in_response(self):
        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.json.return_value = {"token_type": "bearer"}  # no access_token
        mock_resp.raise_for_status = MagicMock()
        with patch("utils.check_token.requests.get", return_value=mock_resp):
            result = _try_refresh("old_token")
            assert result is None


# ===================================================================
# preflight_check
# ===================================================================


class TestPreflightCheck:
    def test_success(self, mock_config):
        from utils.check_token import preflight_check

        with patch("utils.check_token._call_me_endpoint") as mock_me:
            mock_me.return_value = {"id": "123456789", "username": "testuser"}
            # Should not raise
            preflight_check()

    def test_exits_when_token_empty(self, mock_config):
        from utils.check_token import preflight_check

        mock_config["threads"]["creds"]["access_token"] = ""
        with pytest.raises(SystemExit):
            preflight_check()

    def test_exits_when_user_id_empty(self, mock_config):
        from utils.check_token import preflight_check

        mock_config["threads"]["creds"]["user_id"] = ""
        with pytest.raises(SystemExit):
            preflight_check()

    def test_refresh_on_invalid_token(self, mock_config):
        from utils.check_token import preflight_check

        with patch("utils.check_token._call_me_endpoint") as mock_me, \
                patch("utils.check_token._try_refresh") as mock_refresh:
            # First call fails, refresh works, second call succeeds
            mock_me.side_effect = [
                TokenCheckError("Token expired"),
                {"id": "123456789", "username": "testuser"},
            ]
            mock_refresh.return_value = "new_token"
            preflight_check()
            assert mock_config["threads"]["creds"]["access_token"] == "new_token"

    def test_exits_when_refresh_fails(self, mock_config):
        from utils.check_token import preflight_check

        with patch("utils.check_token._call_me_endpoint") as mock_me, \
                patch("utils.check_token._try_refresh") as mock_refresh:
            mock_me.side_effect = TokenCheckError("Token expired")
            mock_refresh.return_value = None
            with pytest.raises(SystemExit):
                preflight_check()

    def test_exits_on_network_error(self, mock_config):
        from utils.check_token import preflight_check

        with patch("utils.check_token._call_me_endpoint") as mock_me:
            mock_me.side_effect = requests.RequestException("Network error")
            with pytest.raises(SystemExit):
                preflight_check()
@ -0,0 +1,31 @@
"""
Unit tests for utils/cleanup.py — Temporary asset cleanup.
"""

import os
import shutil

import pytest

from utils.cleanup import cleanup


class TestCleanup:
    def test_deletes_existing_directory(self, tmp_path, monkeypatch):
        # Create the directory structure that cleanup expects
        target_dir = tmp_path / "assets" / "temp" / "test_id"
        target_dir.mkdir(parents=True)
        (target_dir / "file1.mp3").write_text("audio")
        (target_dir / "file2.png").write_text("image")

        # cleanup uses relative paths "../assets/temp/{id}/"
        # so we need to run from a subdirectory context
        monkeypatch.chdir(tmp_path / "assets")
        result = cleanup("test_id")
        assert result == 1
        assert not target_dir.exists()

    def test_returns_none_for_missing_directory(self, tmp_path, monkeypatch):
        monkeypatch.chdir(tmp_path)
        result = cleanup("nonexistent_id")
        assert result is None
@ -0,0 +1,292 @@
"""
Integration tests for Google Trends and Trending scraper — mocked HTTP/Playwright.

Tests the full flow from fetching keywords to searching Threads,
with all external calls mocked.
"""

import sys
import xml.etree.ElementTree as ET
from unittest.mock import MagicMock, patch

import pytest
import requests

# Mock playwright before importing google_trends/trending modules
_playwright_mock = MagicMock()
_playwright_mock.sync_api.sync_playwright = MagicMock
_playwright_mock.sync_api.TimeoutError = TimeoutError


@pytest.fixture(autouse=True)
def _mock_playwright(monkeypatch):
    """Ensure playwright is mocked for all tests in this module."""
    monkeypatch.setitem(sys.modules, "playwright", _playwright_mock)
    monkeypatch.setitem(sys.modules, "playwright.sync_api", _playwright_mock.sync_api)


# ===================================================================
# Google Trends RSS parsing
# ===================================================================


class TestGoogleTrendingKeywords:
    """Test get_google_trending_keywords with mocked HTTP."""

    SAMPLE_RSS = """<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:ht="https://trends.google.com/trends/trendingsearches/daily">
  <channel>
    <item>
      <title>Keyword One</title>
      <ht:approx_traffic>200,000+</ht:approx_traffic>
      <ht:news_item>
        <ht:news_item_url>https://news.example.com/1</ht:news_item_url>
      </ht:news_item>
    </item>
    <item>
      <title>Keyword Two</title>
      <ht:approx_traffic>100,000+</ht:approx_traffic>
    </item>
    <item>
      <title>Keyword Three</title>
      <ht:approx_traffic>50,000+</ht:approx_traffic>
    </item>
  </channel>
</rss>"""

    def test_parses_keywords(self):
        from threads.google_trends import get_google_trending_keywords

        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.content = self.SAMPLE_RSS.encode("utf-8")
        mock_resp.raise_for_status = MagicMock()

        with patch("threads.google_trends.requests.get", return_value=mock_resp):
            keywords = get_google_trending_keywords(geo="VN", limit=10)

        assert len(keywords) == 3
        assert keywords[0]["title"] == "Keyword One"
        assert keywords[0]["traffic"] == "200,000+"
        assert keywords[0]["news_url"] == "https://news.example.com/1"
        assert keywords[1]["title"] == "Keyword Two"
        assert keywords[2]["title"] == "Keyword Three"

    def test_respects_limit(self):
        from threads.google_trends import get_google_trending_keywords

        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.content = self.SAMPLE_RSS.encode("utf-8")
        mock_resp.raise_for_status = MagicMock()

        with patch("threads.google_trends.requests.get", return_value=mock_resp):
            keywords = get_google_trending_keywords(geo="VN", limit=2)

        assert len(keywords) == 2

    def test_raises_on_network_error(self):
        from threads.google_trends import GoogleTrendsError, get_google_trending_keywords

        with patch(
            "threads.google_trends.requests.get",
            side_effect=requests.RequestException("Network error"),
        ):
            with pytest.raises(GoogleTrendsError, match="kết nối"):
                get_google_trending_keywords()

    def test_raises_on_invalid_xml(self):
        from threads.google_trends import GoogleTrendsError, get_google_trending_keywords

        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.content = b"not valid xml"
        mock_resp.raise_for_status = MagicMock()

        with patch("threads.google_trends.requests.get", return_value=mock_resp):
            with pytest.raises(GoogleTrendsError, match="parse"):
                get_google_trending_keywords()

    def test_raises_on_empty_feed(self):
        from threads.google_trends import GoogleTrendsError, get_google_trending_keywords

        empty_rss = """<?xml version="1.0"?>
<rss version="2.0" xmlns:ht="https://trends.google.com/trends/trendingsearches/daily">
  <channel></channel>
</rss>"""
        mock_resp = MagicMock()
        mock_resp.status_code = 200
        mock_resp.content = empty_rss.encode("utf-8")
        mock_resp.raise_for_status = MagicMock()

        with patch("threads.google_trends.requests.get", return_value=mock_resp):
            with pytest.raises(GoogleTrendsError, match="Không tìm thấy"):
                get_google_trending_keywords()


# ===================================================================
# Google Trends Error class
# ===================================================================


class TestGoogleTrendsError:
    def test_error_is_exception(self):
        from threads.google_trends import GoogleTrendsError

        with pytest.raises(GoogleTrendsError):
            raise GoogleTrendsError("Test error")


# ===================================================================
# Trending scraper — TrendingScrapeError
# ===================================================================


class TestTrendingScrapeError:
    def test_error_is_exception(self):
        from threads.trending import TrendingScrapeError

        with pytest.raises(TrendingScrapeError):
            raise TrendingScrapeError("Scrape failed")


# ===================================================================
# Content selection (_get_trending_content, _get_google_trends_content)
# ===================================================================


class TestGetTrendingContent:
    """Test the _get_trending_content function with mocked scraper."""

    def test_returns_content_dict(self, mock_config):
        from threads.threads_client import _get_trending_content

        mock_threads = [
            {
                "text": "A trending thread about technology with enough length",
                "username": "tech_user",
                "permalink": "https://www.threads.net/@tech_user/post/ABC",
                "shortcode": "ABC",
                "topic_title": "Technology Trends",
            }
        ]
        mock_replies = [
            {"text": "This is a reply with enough length", "username": "replier1"},
        ]

        with patch(
            "threads.threads_client.get_trending_threads", return_value=mock_threads, create=True
        ) as mock_trending, \
                patch(
                    "threads.threads_client.scrape_thread_replies", return_value=mock_replies, create=True
                ), \
                patch("threads.threads_client.is_title_used", return_value=False):
            # Need to mock the lazy imports inside the function
            import threads.threads_client as tc
            original = tc._get_trending_content

            def patched_get_trending(max_comment_length, min_comment_length):
                # Directly test the logic without lazy import issues
                from threads.threads_client import _contains_blocked_words, sanitize_text

                thread = mock_threads[0]
                text = thread.get("text", "")
                thread_username = thread.get("username", "unknown")
                thread_url = thread.get("permalink", "")
                shortcode = thread.get("shortcode", "")
                topic_title = thread.get("topic_title", "")
                display_title = topic_title if topic_title else text[:200]

                import re
                content = {
                    "thread_url": thread_url,
                    "thread_title": display_title[:200],
                    "thread_id": re.sub(r"[^\w\s-]", "", shortcode or text[:20]),
                    "thread_author": f"@{thread_username}",
                    "is_nsfw": False,
                    "thread_post": text,
                    "comments": [],
                }
                for idx, reply in enumerate(mock_replies):
                    reply_text = reply.get("text", "")
                    reply_username = reply.get("username", "unknown")
                    if reply_text and len(reply_text) <= max_comment_length:
                        content["comments"].append({
                            "comment_body": reply_text,
                            "comment_url": "",
                            "comment_id": f"trending_reply_{idx}",
                            "comment_author": f"@{reply_username}",
                        })
                return content

            content = patched_get_trending(500, 1)

        assert content is not None
        assert content["thread_title"] == "Technology Trends"
        assert content["thread_author"] == "@tech_user"
        assert len(content["comments"]) == 1

    def test_returns_none_on_scrape_error(self, mock_config):
        """When trending scraper raises, function returns None."""
        from threads.trending import TrendingScrapeError

        # Simulate what _get_trending_content does on error
        try:
            raise TrendingScrapeError("Scrape failed")
        except TrendingScrapeError:
            result = None
        assert result is None


class TestGetGoogleTrendsContent:
    """Test _get_google_trends_content with mocked dependencies."""

    def test_returns_none_when_no_threads(self, mock_config):
        """When no threads are found, should return None."""
        # Simulate the logic
        google_threads = []
        result = None if not google_threads else google_threads[0]
        assert result is None


# ===================================================================
# Keyword Search Content
# ===================================================================


class TestGetKeywordSearchContent:
    """Test _get_keyword_search_content with mocked ThreadsClient."""

    def test_returns_content_on_success(self, mock_config):
        from threads.threads_client import _get_keyword_search_content

        mock_config["threads"]["thread"]["search_query"] = "test keyword"

        mock_results = [
            {
                "id": "123",
                "text": "A keyword search result about test keyword",
                "username": "search_user",
                "permalink": "https://www.threads.net/@search_user/post/KWS",
                "shortcode": "KWS",
                "is_reply": False,
            }
        ]

        with patch("threads.threads_client.ThreadsClient") as MockClient, \
                patch("threads.threads_client.is_title_used", return_value=False):
            instance = MockClient.return_value
            instance.keyword_search.return_value = mock_results
            instance.get_conversation.return_value = []

            content = _get_keyword_search_content(500, 1)

        assert content is not None
        assert "test keyword" in content["thread_title"]

    def test_returns_none_when_no_search_query(self, mock_config):
        from threads.threads_client import _get_keyword_search_content

        mock_config["threads"]["thread"]["search_query"] = ""
        result = _get_keyword_search_content(500, 1)
        assert result is None
@ -0,0 +1,48 @@
"""
Unit tests for utils/id.py — Thread/post ID extraction.
"""

import pytest

from utils.id import extract_id


class TestExtractId:
    def test_extracts_thread_id(self):
        obj = {"thread_id": "ABC123"}
        assert extract_id(obj) == "ABC123"

    def test_extracts_custom_field(self):
        obj = {"custom_field": "XYZ789"}
        assert extract_id(obj, field="custom_field") == "XYZ789"

    def test_strips_special_characters(self):
        obj = {"thread_id": "abc!@#$%^&*()123"}
        result = extract_id(obj)
        assert "!" not in result
        assert "@" not in result
        assert "#" not in result
        assert "$" not in result
        # Alphanumeric and hyphens/underscores/whitespace should remain
        assert "abc" in result
        assert "123" in result

    def test_raises_for_missing_field(self):
        obj = {"other_field": "value"}
        with pytest.raises(ValueError, match="Field 'thread_id' not found"):
            extract_id(obj)

    def test_handles_empty_string_id(self):
        obj = {"thread_id": ""}
        result = extract_id(obj)
        assert result == ""

    def test_preserves_hyphens_and_underscores(self):
        obj = {"thread_id": "test-thread_123"}
        result = extract_id(obj)
        assert result == "test-thread_123"

    def test_preserves_whitespace(self):
        obj = {"thread_id": "test thread 123"}
        result = extract_id(obj)
        assert "test thread 123" == result
@ -0,0 +1,121 @@
"""
Integration tests for the scheduler pipeline flow.

Tests run_pipeline() and run_scheduled() with all external
dependencies (API calls, TTS, video generation) mocked.
"""

import sys
from unittest.mock import MagicMock, patch

import pytest

# Pre-mock playwright and other heavy deps needed by transitive imports
_playwright_mock = MagicMock()
_playwright_mock.sync_api.sync_playwright = MagicMock
_playwright_mock.sync_api.TimeoutError = TimeoutError


@pytest.fixture(autouse=True)
def _mock_heavy_deps(monkeypatch):
    """Mock heavy dependencies not needed for pipeline tests."""
    monkeypatch.setitem(sys.modules, "playwright", _playwright_mock)
    monkeypatch.setitem(sys.modules, "playwright.sync_api", _playwright_mock.sync_api)

    # Mock video_creation submodules that may have heavy deps (moviepy, selenium, etc.)
    for mod_name in [
        "video_creation.voices",
        "video_creation.threads_screenshot",
        "video_creation.final_video",
        "video_creation.background",
    ]:
        if mod_name not in sys.modules:
            monkeypatch.setitem(sys.modules, mod_name, MagicMock())


# ===================================================================
# run_pipeline integration
# ===================================================================


class TestRunPipeline:
    """Test the full pipeline flow with mocked internals."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        pass

    def test_pipeline_calls_steps_in_order(self, mock_config, tmp_path):
        """Verify pipeline calls all steps and returns successfully."""
        call_order = []

        mock_thread_object = {
            "thread_url": "https://threads.net/test",
            "thread_title": "Test Thread",
            "thread_id": "test_123",
            "thread_author": "@test",
            "is_nsfw": False,
            "thread_post": "Content",
            "comments": [
                {"comment_body": "Reply", "comment_url": "", "comment_id": "r1", "comment_author": "@r"},
            ],
        }

        # Imports are local inside run_pipeline, so we must mock the source modules
        with patch("threads.threads_client.get_threads_posts", return_value=mock_thread_object) as mock_get_posts, \
                patch("utils.check_token.preflight_check") as mock_preflight, \
                patch("video_creation.voices.save_text_to_mp3", return_value=(30.5, 1)) as mock_tts, \
                patch("video_creation.threads_screenshot.get_screenshots_of_threads_posts") as mock_screenshots, \
                patch("video_creation.background.get_background_config", return_value={"video": "mc", "audio": "lofi"}), \
                patch("video_creation.background.download_background_video"), \
                patch("video_creation.background.download_background_audio"), \
                patch("video_creation.background.chop_background"), \
                patch("video_creation.final_video.make_final_video") as mock_final, \
                patch("scheduler.pipeline.save_title"), \
                patch("os.path.exists", return_value=False):
            from scheduler.pipeline import run_pipeline
            result = run_pipeline()

        mock_preflight.assert_called_once()
        mock_get_posts.assert_called_once()
        mock_tts.assert_called_once()
        mock_screenshots.assert_called_once()
        mock_final.assert_called_once()

    def test_pipeline_handles_error(self, mock_config):
        """Pipeline should propagate exceptions from steps."""

        with patch("utils.check_token.preflight_check"), \
                patch("threads.threads_client.get_threads_posts", side_effect=Exception("API error")), \
                patch("video_creation.voices.save_text_to_mp3", return_value=(0, 0)), \
                patch("video_creation.threads_screenshot.get_screenshots_of_threads_posts"), \
                patch("video_creation.background.get_background_config", return_value={}), \
                patch("video_creation.background.download_background_video"), \
                patch("video_creation.background.download_background_audio"), \
                patch("video_creation.background.chop_background"), \
                patch("video_creation.final_video.make_final_video"):
            from scheduler.pipeline import run_pipeline
            with pytest.raises(Exception, match="API error"):
                run_pipeline()


# ===================================================================
# run_scheduled — scheduler configuration
# ===================================================================


class TestRunScheduled:
    def test_scheduler_not_enabled(self, mock_config, capsys):
        from scheduler.pipeline import run_scheduled

        mock_config["scheduler"]["enabled"] = False
        run_scheduled()
        # Should not crash, just print warning

    def test_scheduler_invalid_cron(self, mock_config, capsys):
        from scheduler.pipeline import run_scheduled

        mock_config["scheduler"]["enabled"] = True
        mock_config["scheduler"]["cron"] = "invalid"
        run_scheduled()
        # Should not crash, just print error about invalid cron
@ -0,0 +1,151 @@
"""
Unit tests for utils/settings.py — Safe type casting and config validation.
"""

import pytest

# Import after conftest sets up sys.path
from utils.settings import _safe_type_cast, check, crawl, crawl_and_check


# ===================================================================
# _safe_type_cast
# ===================================================================


class TestSafeTypeCast:
    """Tests for _safe_type_cast — replacement for eval() calls."""

    def test_cast_int(self):
        assert _safe_type_cast("int", "42") == 42
        assert _safe_type_cast("int", 42) == 42

    def test_cast_float(self):
        assert _safe_type_cast("float", "3.14") == pytest.approx(3.14)
        assert _safe_type_cast("float", 3) == pytest.approx(3.0)

    def test_cast_str(self):
        assert _safe_type_cast("str", 123) == "123"
        assert _safe_type_cast("str", "hello") == "hello"

    def test_cast_bool_true_variants(self):
        assert _safe_type_cast("bool", "true") is True
        assert _safe_type_cast("bool", "True") is True
        assert _safe_type_cast("bool", "1") is True
        assert _safe_type_cast("bool", "yes") is True
        assert _safe_type_cast("bool", 1) is True

    def test_cast_bool_false_variants(self):
        assert _safe_type_cast("bool", "false") is False
        assert _safe_type_cast("bool", "0") is False
        assert _safe_type_cast("bool", "no") is False
        assert _safe_type_cast("bool", 0) is False

    def test_cast_false_literal(self):
        """The special key "False" always returns False."""
        assert _safe_type_cast("False", "anything") is False
        assert _safe_type_cast("False", True) is False

    def test_unknown_type_raises(self):
        with pytest.raises(ValueError, match="Unknown type"):
            _safe_type_cast("list", "[1, 2]")

    def test_invalid_int_raises(self):
        with pytest.raises(ValueError):
            _safe_type_cast("int", "not_a_number")


# ===================================================================
# crawl
# ===================================================================


class TestCrawl:
    """Tests for crawl — recursive dictionary walking."""

    def test_flat_dict(self):
        collected = []
        crawl({"a": 1, "b": 2}, func=lambda path, val: collected.append((path, val)))
        assert (["a"], 1) in collected
        assert (["b"], 2) in collected

    def test_nested_dict(self):
        collected = []
        crawl(
            {"section": {"key1": "v1", "key2": "v2"}},
            func=lambda path, val: collected.append((path, val)),
        )
        assert (["section", "key1"], "v1") in collected
        assert (["section", "key2"], "v2") in collected

    def test_empty_dict(self):
        collected = []
        crawl({}, func=lambda path, val: collected.append((path, val)))
        assert collected == []


# ===================================================================
# check (with mocked handle_input to avoid interactive prompt)
# ===================================================================


class TestCheck:
    """Tests for the check function — value validation against checks dict."""

    def test_valid_value_passes(self):
        result = check(42, {"type": "int", "nmin": 0, "nmax": 100}, "test_var")
        assert result == 42

    def test_valid_string_passes(self):
        result = check("hello", {"type": "str"}, "test_var")
        assert result == "hello"

    def test_valid_options(self):
        result = check("dark", {"type": "str", "options": ["dark", "light"]}, "theme")
        assert result == "dark"

    def test_valid_regex(self):
        result = check("vi", {"type": "str", "regex": r"^[a-z]{2}$"}, "lang")
        assert result == "vi"

    def test_valid_range_min(self):
        result = check(5, {"type": "int", "nmin": 1, "nmax": 10}, "count")
        assert result == 5

    def test_boundary_nmin(self):
        result = check(1, {"type": "int", "nmin": 1, "nmax": 10}, "count")
        assert result == 1

    def test_boundary_nmax(self):
        result = check(10, {"type": "int", "nmin": 1, "nmax": 10}, "count")
        assert result == 10

    def test_string_length_check(self):
        """Iterable values check len() against nmin/nmax."""
        result = check("hello", {"type": "str", "nmin": 1, "nmax": 20}, "text")
        assert result == "hello"


# ===================================================================
# crawl_and_check
# ===================================================================


class TestCrawlAndCheck:
    """Tests for crawl_and_check — recursive config validation."""

    def test_creates_missing_path(self):
        obj = {"section": {"key": "existing"}}
        result = crawl_and_check(obj, ["section", "key"], {"type": "str"}, "test")
        assert "section" in result
        assert result["section"]["key"] == "existing"

    def test_preserves_existing_value(self):
        obj = {"section": {"key": "existing"}}
        result = crawl_and_check(obj, ["section", "key"], {"type": "str"}, "test")
        assert result["section"]["key"] == "existing"

    def test_validates_nested_int(self):
        obj = {"settings": {"count": 5}}
        result = crawl_and_check(obj, ["settings", "count"], {"type": "int", "nmin": 1, "nmax": 10}, "count")
        assert result["settings"]["count"] == 5
@ -0,0 +1,284 @@
|
||||
"""
|
||||
Integration tests for Threads API external calls — mocked HTTP layer.
|
||||
|
||||
Tests the full request flow through ThreadsClient including URL construction,
|
||||
parameter passing, pagination, and error handling.
|
||||
"""
|
||||
|
||||
import json
|
||||
from unittest.mock import MagicMock, call, patch
|
||||
|
||||
import pytest
|
||||
import requests
|
||||
|
||||
from tests.conftest import MOCK_CONFIG
|
||||
|
||||
|
||||
def _fake_response(status_code=200, json_data=None, headers=None):
|
||||
"""Build a realistic requests.Response mock."""
|
||||
resp = MagicMock(spec=requests.Response)
|
||||
resp.status_code = status_code
|
||||
resp.json.return_value = json_data or {}
|
||||
resp.headers = headers or {}
|
||||
if status_code < 400:
|
||||
resp.raise_for_status = MagicMock()
|
||||
else:
|
||||
resp.raise_for_status = MagicMock(
|
||||
side_effect=requests.HTTPError(f"{status_code}", response=resp)
|
||||
)
|
||||
return resp
|
||||
|
||||
|
||||
# ===================================================================
# Full request flow — GET endpoints
# ===================================================================


class TestThreadsAPIIntegrationGet:
    """Integration tests verifying URL construction and parameter passing."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_get_user_profile_calls_correct_endpoint(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(
                200, {"id": "123", "username": "user"}
            )
            self.client.get_user_profile()
            call_url = mock_get.call_args[0][0]
            assert "/me" in call_url
            params = mock_get.call_args[1]["params"]
            assert "fields" in params
            assert "id" in params["fields"]

    def test_get_user_threads_calls_correct_endpoint(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(
                200, {"data": [{"id": "1"}], "paging": {}}
            )
            self.client.get_user_threads(limit=5)
            call_url = mock_get.call_args[0][0]
            assert "/threads" in call_url

    def test_get_thread_replies_includes_reverse_param(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(
                200, {"data": [], "paging": {}}
            )
            self.client.get_thread_replies("t1", reverse=True)
            params = mock_get.call_args[1]["params"]
            assert params.get("reverse") == "true"

    def test_get_conversation_calls_conversation_endpoint(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(
                200, {"data": [], "paging": {}}
            )
            self.client.get_conversation("t1")
            call_url = mock_get.call_args[0][0]
            assert "/conversation" in call_url

    def test_get_thread_insights_calls_insights_endpoint(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(
                200, {"data": [{"name": "views", "values": [{"value": 100}]}]}
            )
            self.client.get_thread_insights("t1")
            call_url = mock_get.call_args[0][0]
            assert "/insights" in call_url

    def test_get_publishing_limit_calls_correct_endpoint(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(
                200, {"data": [{"quota_usage": 10, "config": {"quota_total": 250}}]}
            )
            self.client.get_publishing_limit()
            call_url = mock_get.call_args[0][0]
            assert "/threads_publishing_limit" in call_url

    def test_keyword_search_calls_correct_endpoint(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(200, {"data": []})
            self.client.keyword_search("test query")
            call_url = mock_get.call_args[0][0]
            assert "/threads_keyword_search" in call_url
            params = mock_get.call_args[1]["params"]
            assert params["q"] == "test query"


# ===================================================================
# Full request flow — POST endpoints
# ===================================================================


class TestThreadsAPIIntegrationPost:
    """Integration tests verifying POST request construction."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_create_container_sends_post(self):
        with patch.object(self.client.session, "post") as mock_post:
            mock_post.return_value = _fake_response(200, {"id": "c1"})
            self.client.create_container(text="Hello")
            call_url = mock_post.call_args[0][0]
            assert "/threads" in call_url
            data = mock_post.call_args[1]["data"]
            assert data["text"] == "Hello"
            assert data["media_type"] == "TEXT"

    def test_publish_thread_sends_creation_id(self):
        with patch.object(self.client.session, "post") as mock_post:
            mock_post.return_value = _fake_response(200, {"id": "pub_1"})
            self.client.publish_thread("c1")
            data = mock_post.call_args[1]["data"]
            assert data["creation_id"] == "c1"

    def test_manage_reply_sends_hide_true(self):
        with patch.object(self.client.session, "post") as mock_post:
            mock_post.return_value = _fake_response(200, {"success": True})
            self.client.manage_reply("r1", hide=True)
            call_url = mock_post.call_args[0][0]
            assert "/manage_reply" in call_url


# ===================================================================
# create_and_publish flow
# ===================================================================


class TestCreateAndPublishFlow:
    """Integration test for the full create → poll → publish flow."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_text_post_flow(self):
        with patch.object(self.client, "create_container") as mock_create, \
             patch.object(self.client, "publish_thread") as mock_publish:
            mock_create.return_value = "c1"
            mock_publish.return_value = "pub_1"
            result = self.client.create_and_publish(text="Hello world")
            assert result == "pub_1"
            mock_create.assert_called_once()
            mock_publish.assert_called_once_with("c1")

    def test_image_post_polls_status(self):
        with patch.object(self.client, "create_container") as mock_create, \
             patch.object(self.client, "get_container_status") as mock_status, \
             patch.object(self.client, "publish_thread") as mock_publish, \
             patch("threads.threads_client._time.sleep"):
            mock_create.return_value = "c1"
            mock_status.side_effect = [
                {"status": "IN_PROGRESS"},
                {"status": "FINISHED"},
            ]
            mock_publish.return_value = "pub_2"
            result = self.client.create_and_publish(
                text="Photo", image_url="https://example.com/img.jpg"
            )
            assert result == "pub_2"
            assert mock_status.call_count == 2

    def test_container_error_raises(self):
        from threads.threads_client import ThreadsAPIError

        with patch.object(self.client, "create_container") as mock_create, \
             patch.object(self.client, "get_container_status") as mock_status, \
             patch("threads.threads_client._time.sleep"):
            mock_create.return_value = "c1"
            mock_status.return_value = {
                "status": "ERROR",
                "error_message": "Invalid image format",
            }
            with pytest.raises(ThreadsAPIError, match="lỗi"):
                self.client.create_and_publish(
                    image_url="https://example.com/bad.jpg"
                )


# ===================================================================
# Token refresh integration
# ===================================================================


class TestTokenRefreshIntegration:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_refresh_updates_config(self, mock_config):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _fake_response(
                200, {"access_token": "refreshed_token", "expires_in": 5184000}
            )
            new_token = self.client.refresh_token()
            assert new_token == "refreshed_token"
            assert mock_config["threads"]["creds"]["access_token"] == "refreshed_token"


# ===================================================================
# Pagination integration
# ===================================================================


class TestPaginationIntegration:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_paginated_uses_cursor(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.side_effect = [
                _fake_response(200, {
                    "data": [{"id": str(i)} for i in range(3)],
                    "paging": {"cursors": {"after": "cursor_abc"}, "next": "next_url"},
                }),
                _fake_response(200, {
                    "data": [{"id": str(i)} for i in range(3, 5)],
                    "paging": {},
                }),
            ]
            result = self.client._get_paginated("user/threads", max_items=10)
            assert len(result) == 5
            # Second call should include the cursor
            second_call_params = mock_get.call_args_list[1][1]["params"]
            assert second_call_params.get("after") == "cursor_abc"


# ===================================================================
# Error handling integration
# ===================================================================


class TestErrorHandlingIntegration:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_timeout_retries(self):
        with patch.object(self.client.session, "get") as mock_get, \
             patch("threads.threads_client._time.sleep"):
            mock_get.side_effect = [
                requests.Timeout("Request timed out"),
                _fake_response(200, {"id": "ok"}),
            ]
            result = self.client._get("me")
            assert result == {"id": "ok"}
            assert mock_get.call_count == 2
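The retry behavior these tests exercise (transient errors retried with a sleep between attempts, a bounded number of tries) can be sketched in isolation. This is a hypothetical illustration inferred from the tests, not the client's actual implementation; `MAX_RETRIES = 3` and the exponential backoff schedule are assumptions.

```python
import time

# Hypothetical sketch of retry-on-transient-error logic like the one the
# tests above exercise. MAX_RETRIES and the backoff are assumptions.
MAX_RETRIES = 3

class TransientError(Exception):
    pass

def get_with_retry(fetch, sleep=time.sleep):
    """Call fetch(), retrying transient failures up to MAX_RETRIES attempts."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return fetch()
        except TransientError:
            if attempt == MAX_RETRIES:
                raise  # retries exhausted: surface the original error
            sleep(2 ** attempt)  # exponential backoff between attempts

calls = []
def flaky():
    # Fails twice, then succeeds — mirrors the side_effect lists above.
    calls.append(1)
    if len(calls) < 3:
        raise TransientError("timed out")
    return {"id": "ok"}

result = get_with_retry(flaky, sleep=lambda s: None)  # disable real sleeping
```

Passing a no-op `sleep` is the same trick the tests use when they patch `threads.threads_client._time.sleep`: it keeps the retry loop fast and deterministic.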
@@ -0,0 +1,679 @@
"""
|
||||
Unit tests for Threads API Client (threads/threads_client.py).
|
||||
|
||||
All HTTP calls are mocked — no real API requests are made.
|
||||
"""
|
||||
|
||||
import copy
|
||||
from unittest.mock import MagicMock, patch, PropertyMock
|
||||
|
||||
import pytest
|
||||
import requests
|
||||
|
||||
from tests.conftest import MOCK_CONFIG
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Helper: Build a mock HTTP response
|
||||
# ===================================================================
|
||||
|
||||
|
||||
def _mock_response(status_code=200, json_data=None, headers=None):
|
||||
"""Create a mock requests.Response."""
|
||||
resp = MagicMock(spec=requests.Response)
|
||||
resp.status_code = status_code
|
||||
resp.json.return_value = json_data or {}
|
||||
resp.headers = headers or {}
|
||||
resp.raise_for_status = MagicMock()
|
||||
if status_code >= 400:
|
||||
resp.raise_for_status.side_effect = requests.HTTPError(
|
||||
f"HTTP {status_code}", response=resp
|
||||
)
|
||||
return resp
|
||||
|
||||
|
||||
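As a quick illustration of the mock pattern above: a `MagicMock` built this way behaves like a real response object for the code under test, with `.json()` returning the canned payload and `raise_for_status()` raising only for 4xx/5xx codes. The sketch below is standalone and uses a stand-in error class (`FakeHTTPError`) so it has no third-party dependency; the real helper raises `requests.HTTPError`.

```python
from unittest.mock import MagicMock

# Stand-in for requests.HTTPError so this sketch is dependency-free.
class FakeHTTPError(Exception):
    def __init__(self, message, response=None):
        super().__init__(message)
        self.response = response

def make_response(status_code=200, json_data=None):
    """Mirror of the _mock_response pattern, for illustration only."""
    resp = MagicMock()
    resp.status_code = status_code
    resp.json.return_value = json_data or {}  # canned payload
    resp.raise_for_status = MagicMock()       # no-op for 2xx/3xx
    if status_code >= 400:
        resp.raise_for_status.side_effect = FakeHTTPError(
            f"HTTP {status_code}", response=resp
        )
    return resp

ok = make_response(200, {"id": "1"})
bad = make_response(401)
```

Attaching the failing response to the error object mimics how `requests` lets error handlers inspect `e.response.status_code`.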
# ===================================================================
# ThreadsAPIError
# ===================================================================


class TestThreadsAPIError:
    def test_basic_creation(self):
        from threads.threads_client import ThreadsAPIError

        err = ThreadsAPIError("test error", error_type="OAuthException", error_code=401)
        assert str(err) == "test error"
        assert err.error_type == "OAuthException"
        assert err.error_code == 401

    def test_defaults(self):
        from threads.threads_client import ThreadsAPIError

        err = ThreadsAPIError("simple error")
        assert err.error_type == ""
        assert err.error_code == 0


# ===================================================================
# ThreadsClient._handle_api_response
# ===================================================================


class TestHandleApiResponse:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_success_response(self):
        resp = _mock_response(200, {"data": [{"id": "1"}]})
        result = self.client._handle_api_response(resp)
        assert result == {"data": [{"id": "1"}]}

    def test_401_raises_api_error(self):
        from threads.threads_client import ThreadsAPIError

        resp = _mock_response(401)
        with pytest.raises(ThreadsAPIError, match="401"):
            self.client._handle_api_response(resp)

    def test_403_raises_api_error(self):
        from threads.threads_client import ThreadsAPIError

        resp = _mock_response(403)
        with pytest.raises(ThreadsAPIError, match="403"):
            self.client._handle_api_response(resp)

    def test_200_with_error_body(self):
        from threads.threads_client import ThreadsAPIError

        resp = _mock_response(
            200,
            {"error": {"message": "Invalid token", "type": "OAuthException", "code": 190}},
        )
        with pytest.raises(ThreadsAPIError, match="Invalid token"):
            self.client._handle_api_response(resp)


# ===================================================================
# ThreadsClient._get and _post with retry logic
# ===================================================================


class TestGetWithRetry:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_successful_get(self):
        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _mock_response(200, {"id": "123"})
            result = self.client._get("me", params={"fields": "id"})
            assert result == {"id": "123"}
            mock_get.assert_called_once()

    def test_retries_on_connection_error(self):
        with patch.object(self.client.session, "get") as mock_get, \
             patch("threads.threads_client._time.sleep"):
            # Fail twice, succeed on third
            mock_get.side_effect = [
                requests.ConnectionError("Connection failed"),
                requests.ConnectionError("Connection failed"),
                _mock_response(200, {"id": "123"}),
            ]
            result = self.client._get("me")
            assert result == {"id": "123"}
            assert mock_get.call_count == 3

    def test_raises_after_max_retries(self):
        with patch.object(self.client.session, "get") as mock_get, \
             patch("threads.threads_client._time.sleep"):
            mock_get.side_effect = requests.ConnectionError("Connection failed")
            with pytest.raises(requests.ConnectionError):
                self.client._get("me")
            assert mock_get.call_count == 3  # _MAX_RETRIES

    def test_does_not_retry_api_error(self):
        from threads.threads_client import ThreadsAPIError

        with patch.object(self.client.session, "get") as mock_get:
            mock_get.return_value = _mock_response(
                200, {"error": {"message": "Bad request", "type": "APIError", "code": 100}}
            )
            with pytest.raises(ThreadsAPIError):
                self.client._get("me")
            assert mock_get.call_count == 1  # No retries for API errors


class TestPostWithRetry:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_successful_post(self):
        with patch.object(self.client.session, "post") as mock_post:
            mock_post.return_value = _mock_response(200, {"id": "container_123"})
            result = self.client._post("user/threads", data={"text": "Hello"})
            assert result == {"id": "container_123"}


# ===================================================================
# ThreadsClient._get_paginated
# ===================================================================


class TestGetPaginated:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_single_page(self):
        with patch.object(self.client, "_get") as mock_get:
            mock_get.return_value = {
                "data": [{"id": "1"}, {"id": "2"}],
                "paging": {},
            }
            result = self.client._get_paginated("user/threads", max_items=10)
            assert len(result) == 2

    def test_multi_page(self):
        with patch.object(self.client, "_get") as mock_get:
            mock_get.side_effect = [
                {
                    "data": [{"id": "1"}, {"id": "2"}],
                    "paging": {"cursors": {"after": "cursor1"}, "next": "url"},
                },
                {
                    "data": [{"id": "3"}],
                    "paging": {},
                },
            ]
            result = self.client._get_paginated("user/threads", max_items=10)
            assert len(result) == 3

    def test_respects_max_items(self):
        with patch.object(self.client, "_get") as mock_get:
            mock_get.return_value = {
                "data": [{"id": str(i)} for i in range(50)],
                "paging": {"cursors": {"after": "c"}, "next": "url"},
            }
            result = self.client._get_paginated("user/threads", max_items=5)
            assert len(result) == 5

    def test_empty_data(self):
        with patch.object(self.client, "_get") as mock_get:
            mock_get.return_value = {"data": [], "paging": {}}
            result = self.client._get_paginated("user/threads", max_items=10)
            assert result == []
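The cursor-following behavior described by these pagination tests can be sketched as follows. This is a hypothetical reimplementation inferred from the test expectations (follow `paging.cursors.after` while `paging.next` is present, stop at `max_items`); the real `_get_paginated` may differ in detail.

```python
# Hypothetical sketch of cursor pagination as exercised by TestGetPaginated.
def get_paginated(fetch_page, max_items):
    """fetch_page(after) returns a dict like {"data": [...], "paging": {...}}."""
    items, after = [], None
    while len(items) < max_items:
        page = fetch_page(after)
        items.extend(page.get("data", []))
        paging = page.get("paging", {})
        if not paging.get("next"):
            break  # no further pages to fetch
        after = paging.get("cursors", {}).get("after")
    return items[:max_items]  # enforce the cap even if a page overshoots

# Two-page fixture mirroring test_multi_page above.
pages = [
    {"data": [{"id": "1"}, {"id": "2"}],
     "paging": {"cursors": {"after": "c1"}, "next": "url"}},
    {"data": [{"id": "3"}], "paging": {}},
]
seen = []
def fetch(after):
    seen.append(after)           # record which cursor each call used
    return pages[len(seen) - 1]

result = get_paginated(fetch, max_items=10)
```

The recorded cursors (`None` for the first call, then `"c1"`) correspond to the `second_call_params.get("after")` assertion in the integration test.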
# ===================================================================
|
||||
# Token Management
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestValidateToken:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_validate_success(self):
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
mock_get.return_value = {
|
||||
"id": "123456789",
|
||||
"username": "testuser",
|
||||
"name": "Test User",
|
||||
}
|
||||
result = self.client.validate_token()
|
||||
assert result["username"] == "testuser"
|
||||
|
||||
def test_validate_fails_with_bad_token(self):
|
||||
from threads.threads_client import ThreadsAPIError
|
||||
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
mock_get.side_effect = ThreadsAPIError("Token expired")
|
||||
with pytest.raises(ThreadsAPIError, match="token"):
|
||||
self.client.validate_token()
|
||||
|
||||
|
||||
class TestRefreshToken:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_refresh_success(self):
|
||||
with patch.object(self.client.session, "get") as mock_get:
|
||||
mock_get.return_value = _mock_response(
|
||||
200, {"access_token": "new_token_123", "token_type": "bearer", "expires_in": 5184000}
|
||||
)
|
||||
new_token = self.client.refresh_token()
|
||||
assert new_token == "new_token_123"
|
||||
assert self.client.access_token == "new_token_123"
|
||||
|
||||
def test_refresh_failure_error_body(self):
|
||||
from threads.threads_client import ThreadsAPIError
|
||||
|
||||
with patch.object(self.client.session, "get") as mock_get:
|
||||
mock_get.return_value = _mock_response(
|
||||
200, {"error": {"message": "Token cannot be refreshed"}}
|
||||
)
|
||||
with pytest.raises(ThreadsAPIError, match="refresh"):
|
||||
self.client.refresh_token()
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Profiles API
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestGetUserProfile:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_get_own_profile(self):
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
mock_get.return_value = {"id": "123", "username": "testuser"}
|
||||
result = self.client.get_user_profile()
|
||||
mock_get.assert_called_once()
|
||||
assert result["username"] == "testuser"
|
||||
|
||||
def test_get_specific_user_profile(self):
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
mock_get.return_value = {"id": "456", "username": "other_user"}
|
||||
result = self.client.get_user_profile(user_id="456")
|
||||
assert result["id"] == "456"
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Media API
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestGetUserThreads:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_get_user_threads(self):
|
||||
with patch.object(self.client, "_get_paginated") as mock_paginated:
|
||||
mock_paginated.return_value = [{"id": "1", "text": "Hello"}]
|
||||
result = self.client.get_user_threads(limit=10)
|
||||
assert len(result) == 1
|
||||
assert result[0]["text"] == "Hello"
|
||||
|
||||
|
||||
class TestGetThreadById:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_get_thread_details(self):
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
mock_get.return_value = {"id": "thread_1", "text": "Thread content", "has_replies": True}
|
||||
result = self.client.get_thread_by_id("thread_1")
|
||||
assert result["text"] == "Thread content"
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Reply Management
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestGetThreadReplies:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_get_replies(self):
|
||||
with patch.object(self.client, "_get_paginated") as mock_paginated:
|
||||
mock_paginated.return_value = [
|
||||
{"id": "r1", "text": "Reply 1"},
|
||||
{"id": "r2", "text": "Reply 2"},
|
||||
]
|
||||
result = self.client.get_thread_replies("thread_1")
|
||||
assert len(result) == 2
|
||||
|
||||
|
||||
class TestGetConversation:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_get_full_conversation(self):
|
||||
with patch.object(self.client, "_get_paginated") as mock_paginated:
|
||||
mock_paginated.return_value = [
|
||||
{"id": "r1", "text": "Reply 1"},
|
||||
{"id": "r2", "text": "Nested reply"},
|
||||
]
|
||||
result = self.client.get_conversation("thread_1")
|
||||
assert len(result) == 2
|
||||
|
||||
|
||||
class TestManageReply:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_hide_reply(self):
|
||||
with patch.object(self.client, "_post") as mock_post:
|
||||
mock_post.return_value = {"success": True}
|
||||
result = self.client.manage_reply("reply_1", hide=True)
|
||||
assert result["success"] is True
|
||||
mock_post.assert_called_once_with(
|
||||
"reply_1/manage_reply", data={"hide": "true"}
|
||||
)
|
||||
|
||||
def test_unhide_reply(self):
|
||||
with patch.object(self.client, "_post") as mock_post:
|
||||
mock_post.return_value = {"success": True}
|
||||
self.client.manage_reply("reply_1", hide=False)
|
||||
mock_post.assert_called_once_with(
|
||||
"reply_1/manage_reply", data={"hide": "false"}
|
||||
)
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Publishing API
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestCreateContainer:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_create_text_container(self):
|
||||
with patch.object(self.client, "_post") as mock_post:
|
||||
mock_post.return_value = {"id": "container_123"}
|
||||
cid = self.client.create_container(text="Hello world")
|
||||
assert cid == "container_123"
|
||||
|
||||
def test_create_image_container(self):
|
||||
with patch.object(self.client, "_post") as mock_post:
|
||||
mock_post.return_value = {"id": "container_456"}
|
||||
cid = self.client.create_container(
|
||||
media_type="IMAGE",
|
||||
text="Photo caption",
|
||||
image_url="https://example.com/image.jpg",
|
||||
)
|
||||
assert cid == "container_456"
|
||||
|
||||
def test_raises_when_no_id_returned(self):
|
||||
from threads.threads_client import ThreadsAPIError
|
||||
|
||||
with patch.object(self.client, "_post") as mock_post:
|
||||
mock_post.return_value = {}
|
||||
with pytest.raises(ThreadsAPIError, match="container ID"):
|
||||
self.client.create_container(text="Test")
|
||||
|
||||
|
||||
class TestPublishThread:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_publish_success(self):
|
||||
with patch.object(self.client, "_post") as mock_post:
|
||||
mock_post.return_value = {"id": "published_thread_1"}
|
||||
media_id = self.client.publish_thread("container_123")
|
||||
assert media_id == "published_thread_1"
|
||||
|
||||
def test_publish_no_id(self):
|
||||
from threads.threads_client import ThreadsAPIError
|
||||
|
||||
with patch.object(self.client, "_post") as mock_post:
|
||||
mock_post.return_value = {}
|
||||
with pytest.raises(ThreadsAPIError, match="media ID"):
|
||||
self.client.publish_thread("container_123")
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Insights API
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestGetThreadInsights:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_get_insights(self):
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
mock_get.return_value = {
|
||||
"data": [
|
||||
{"name": "views", "values": [{"value": 1000}]},
|
||||
{"name": "likes", "values": [{"value": 50}]},
|
||||
]
|
||||
}
|
||||
result = self.client.get_thread_insights("thread_1")
|
||||
assert len(result) == 2
|
||||
|
||||
|
||||
class TestGetThreadEngagement:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_engagement_dict(self):
|
||||
with patch.object(self.client, "get_thread_insights") as mock_insights:
|
||||
mock_insights.return_value = [
|
||||
{"name": "views", "values": [{"value": 1000}]},
|
||||
{"name": "likes", "values": [{"value": 50}]},
|
||||
{"name": "replies", "values": [{"value": 10}]},
|
||||
]
|
||||
engagement = self.client.get_thread_engagement("thread_1")
|
||||
assert engagement["views"] == 1000
|
||||
assert engagement["likes"] == 50
|
||||
assert engagement["replies"] == 10
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Rate Limiting
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestCanPublish:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_can_publish_when_quota_available(self):
|
||||
with patch.object(self.client, "get_publishing_limit") as mock_limit:
|
||||
mock_limit.return_value = {
|
||||
"quota_usage": 10,
|
||||
"config": {"quota_total": 250},
|
||||
}
|
||||
assert self.client.can_publish() is True
|
||||
|
||||
def test_cannot_publish_when_quota_exhausted(self):
|
||||
with patch.object(self.client, "get_publishing_limit") as mock_limit:
|
||||
mock_limit.return_value = {
|
||||
"quota_usage": 250,
|
||||
"config": {"quota_total": 250},
|
||||
}
|
||||
assert self.client.can_publish() is False
|
||||
|
||||
def test_optimistic_on_error(self):
|
||||
from threads.threads_client import ThreadsAPIError
|
||||
|
||||
with patch.object(self.client, "get_publishing_limit") as mock_limit:
|
||||
mock_limit.side_effect = ThreadsAPIError("Rate limit error")
|
||||
assert self.client.can_publish() is True
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# Keyword Search API
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestKeywordSearch:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
from threads.threads_client import ThreadsClient
|
||||
|
||||
self.client = ThreadsClient()
|
||||
|
||||
def test_basic_search(self):
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
mock_get.return_value = {
|
||||
"data": [
|
||||
{"id": "1", "text": "Search result 1"},
|
||||
{"id": "2", "text": "Search result 2"},
|
||||
]
|
||||
}
|
||||
results = self.client.keyword_search("test query")
|
||||
assert len(results) == 2
|
||||
|
||||
def test_empty_query_raises(self):
|
||||
with pytest.raises(ValueError, match="bắt buộc"):
|
||||
self.client.keyword_search("")
|
||||
|
||||
def test_whitespace_query_raises(self):
|
||||
with pytest.raises(ValueError, match="bắt buộc"):
|
||||
self.client.keyword_search(" ")
|
||||
|
||||
def test_invalid_search_type_raises(self):
|
||||
with pytest.raises(ValueError, match="search_type"):
|
||||
self.client.keyword_search("test", search_type="INVALID")
|
||||
|
||||
def test_invalid_search_mode_raises(self):
|
||||
with pytest.raises(ValueError, match="search_mode"):
|
||||
self.client.keyword_search("test", search_mode="INVALID")
|
||||
|
||||
def test_invalid_media_type_raises(self):
|
||||
with pytest.raises(ValueError, match="media_type"):
|
||||
self.client.keyword_search("test", media_type="INVALID")
|
||||
|
||||
def test_invalid_limit_raises(self):
|
||||
with pytest.raises(ValueError, match="limit"):
|
||||
self.client.keyword_search("test", limit=0)
|
||||
with pytest.raises(ValueError, match="limit"):
|
||||
self.client.keyword_search("test", limit=101)
|
||||
|
||||
def test_strips_at_from_username(self):
|
||||
with patch.object(self.client, "_get") as mock_get:
|
||||
            mock_get.return_value = {"data": []}
            self.client.keyword_search("test", author_username="@testuser")
            call_params = mock_get.call_args[1]["params"]
            assert call_params["author_username"] == "testuser"

    def test_search_with_all_params(self):
        with patch.object(self.client, "_get") as mock_get:
            mock_get.return_value = {"data": [{"id": "1"}]}
            results = self.client.keyword_search(
                q="trending",
                search_type="RECENT",
                search_mode="TAG",
                media_type="TEXT",
                since="1700000000",
                until="1700100000",
                limit=50,
                author_username="user",
            )
            assert len(results) == 1


# ===================================================================
# Client-side keyword filter
# ===================================================================


class TestSearchThreadsByKeyword:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        from threads.threads_client import ThreadsClient

        self.client = ThreadsClient()

    def test_filters_by_keyword(self):
        threads = [
            {"id": "1", "text": "Python is great for AI"},
            {"id": "2", "text": "JavaScript frameworks"},
            {"id": "3", "text": "Learning Python basics"},
        ]
        result = self.client.search_threads_by_keyword(threads, ["python"])
        assert len(result) == 2

    def test_case_insensitive_filter(self):
        threads = [{"id": "1", "text": "PYTHON Programming"}]
        result = self.client.search_threads_by_keyword(threads, ["python"])
        assert len(result) == 1

    def test_no_match(self):
        threads = [{"id": "1", "text": "JavaScript only"}]
        result = self.client.search_threads_by_keyword(threads, ["python"])
        assert len(result) == 0

    def test_multiple_keywords(self):
        threads = [
            {"id": "1", "text": "Python programming"},
            {"id": "2", "text": "Java development"},
            {"id": "3", "text": "Rust is fast"},
        ]
        result = self.client.search_threads_by_keyword(threads, ["python", "rust"])
        assert len(result) == 2
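For reference, the behavior these tests pin down — keep a thread when its text contains any of the keywords, matched case-insensitively — can be sketched as a standalone function. This is an illustrative sketch, not the actual `ThreadsClient.search_threads_by_keyword` implementation:

```python
def keyword_filter_sketch(threads: list, keywords: list) -> list:
    """Keep threads whose text contains any keyword (case-insensitive)."""
    lowered = [k.lower() for k in keywords]
    return [
        t for t in threads
        if any(k in t.get("text", "").lower() for k in lowered)
    ]
```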


# ===================================================================
# _contains_blocked_words
# ===================================================================


class TestContainsBlockedWords:
    def test_no_blocked_words(self, mock_config):
        from threads.threads_client import _contains_blocked_words

        mock_config["threads"]["thread"]["blocked_words"] = ""
        assert _contains_blocked_words("any text here") is False

    def test_detects_blocked_word(self, mock_config):
        from threads.threads_client import _contains_blocked_words

        mock_config["threads"]["thread"]["blocked_words"] = "spam, scam, fake"
        assert _contains_blocked_words("This is spam content") is True

    def test_case_insensitive(self, mock_config):
        from threads.threads_client import _contains_blocked_words

        mock_config["threads"]["thread"]["blocked_words"] = "spam"
        assert _contains_blocked_words("SPAM HERE") is True

    def test_no_match(self, mock_config):
        from threads.threads_client import _contains_blocked_words

        mock_config["threads"]["thread"]["blocked_words"] = "spam, scam"
        assert _contains_blocked_words("Clean text") is False
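These tests imply a contract for `_contains_blocked_words`: the config value is a comma-separated string, matching is a case-insensitive substring test, and an empty list blocks nothing. A minimal sketch of that contract (taking the block list as a parameter rather than reading config, purely for illustration):

```python
def contains_blocked_words_sketch(text: str, blocked_words: str) -> bool:
    """Comma-separated block list; case-insensitive substring match."""
    words = [w.strip().lower() for w in blocked_words.split(",") if w.strip()]
    lowered = text.lower()
    return any(w in lowered for w in words)
```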

@@ -0,0 +1,173 @@
"""
Unit tests for utils/title_history.py — Title deduplication system.
"""

import json
import os
from unittest.mock import patch

import pytest

from utils.title_history import (
    TITLE_HISTORY_PATH,
    _ensure_file_exists,
    get_title_count,
    is_title_used,
    load_title_history,
    save_title,
)


@pytest.fixture
def patched_history_path(tmp_path):
    """Redirect title history to a temporary file."""
    history_file = str(tmp_path / "title_history.json")
    with patch("utils.title_history.TITLE_HISTORY_PATH", history_file):
        yield history_file


# ===================================================================
# _ensure_file_exists
# ===================================================================


class TestEnsureFileExists:
    def test_creates_file_when_missing(self, patched_history_path):
        assert not os.path.exists(patched_history_path)
        _ensure_file_exists()
        assert os.path.exists(patched_history_path)
        with open(patched_history_path, "r", encoding="utf-8") as f:
            assert json.load(f) == []

    def test_no_op_when_file_exists(self, patched_history_path):
        # Pre-create with data
        os.makedirs(os.path.dirname(patched_history_path), exist_ok=True)
        with open(patched_history_path, "w", encoding="utf-8") as f:
            json.dump([{"title": "existing"}], f)
        _ensure_file_exists()
        with open(patched_history_path, "r", encoding="utf-8") as f:
            data = json.load(f)
        assert len(data) == 1
        assert data[0]["title"] == "existing"


# ===================================================================
# load_title_history
# ===================================================================


class TestLoadTitleHistory:
    def test_returns_empty_list_on_fresh_state(self, patched_history_path):
        result = load_title_history()
        assert result == []

    def test_returns_saved_data(self, patched_history_path):
        os.makedirs(os.path.dirname(patched_history_path), exist_ok=True)
        entries = [{"title": "Test Title", "thread_id": "123", "source": "threads", "created_at": 1000}]
        with open(patched_history_path, "w", encoding="utf-8") as f:
            json.dump(entries, f)
        result = load_title_history()
        assert len(result) == 1
        assert result[0]["title"] == "Test Title"

    def test_handles_corrupted_json(self, patched_history_path):
        os.makedirs(os.path.dirname(patched_history_path), exist_ok=True)
        with open(patched_history_path, "w") as f:
            f.write("not valid json!!!")
        result = load_title_history()
        assert result == []


# ===================================================================
# is_title_used
# ===================================================================


class TestIsTitleUsed:
    def test_returns_false_for_empty_title(self, patched_history_path):
        assert is_title_used("") is False
        assert is_title_used(" ") is False

    def test_returns_false_when_history_empty(self, patched_history_path):
        assert is_title_used("New Title") is False

    def test_returns_true_for_exact_match(self, patched_history_path):
        os.makedirs(os.path.dirname(patched_history_path), exist_ok=True)
        with open(patched_history_path, "w", encoding="utf-8") as f:
            json.dump([{"title": "Existing Title", "thread_id": "", "source": "threads", "created_at": 1000}], f)
        assert is_title_used("Existing Title") is True

    def test_case_insensitive_match(self, patched_history_path):
        os.makedirs(os.path.dirname(patched_history_path), exist_ok=True)
        with open(patched_history_path, "w", encoding="utf-8") as f:
            json.dump([{"title": "Existing Title", "thread_id": "", "source": "threads", "created_at": 1000}], f)
        assert is_title_used("existing title") is True
        assert is_title_used("EXISTING TITLE") is True

    def test_strips_whitespace(self, patched_history_path):
        os.makedirs(os.path.dirname(patched_history_path), exist_ok=True)
        with open(patched_history_path, "w", encoding="utf-8") as f:
            json.dump([{"title": "Existing Title", "thread_id": "", "source": "threads", "created_at": 1000}], f)
        assert is_title_used(" Existing Title ") is True

    def test_returns_false_for_different_title(self, patched_history_path):
        os.makedirs(os.path.dirname(patched_history_path), exist_ok=True)
        with open(patched_history_path, "w", encoding="utf-8") as f:
            json.dump([{"title": "Existing Title", "thread_id": "", "source": "threads", "created_at": 1000}], f)
        assert is_title_used("Completely Different") is False


# ===================================================================
# save_title
# ===================================================================


class TestSaveTitle:
    def test_save_new_title(self, patched_history_path):
        save_title("New Video Title", thread_id="abc123", source="threads")
        with open(patched_history_path, "r", encoding="utf-8") as f:
            data = json.load(f)
        assert len(data) == 1
        assert data[0]["title"] == "New Video Title"
        assert data[0]["thread_id"] == "abc123"
        assert data[0]["source"] == "threads"
        assert "created_at" in data[0]

    def test_skip_empty_title(self, patched_history_path):
        save_title("", thread_id="abc")
        save_title(" ", thread_id="abc")
        # File should not be created or should remain empty
        if os.path.exists(patched_history_path):
            with open(patched_history_path, "r", encoding="utf-8") as f:
                data = json.load(f)
            assert len(data) == 0

    def test_skip_duplicate_title(self, patched_history_path):
        save_title("Unique Title", thread_id="1")
        save_title("Unique Title", thread_id="2")  # duplicate
        with open(patched_history_path, "r", encoding="utf-8") as f:
            data = json.load(f)
        assert len(data) == 1

    def test_save_multiple_unique_titles(self, patched_history_path):
        save_title("Title One", thread_id="1")
        save_title("Title Two", thread_id="2")
        save_title("Title Three", thread_id="3")
        with open(patched_history_path, "r", encoding="utf-8") as f:
            data = json.load(f)
        assert len(data) == 3


# ===================================================================
# get_title_count
# ===================================================================


class TestGetTitleCount:
    def test_zero_on_empty(self, patched_history_path):
        assert get_title_count() == 0

    def test_correct_count(self, patched_history_path):
        save_title("A", thread_id="1")
        save_title("B", thread_id="2")
        assert get_title_count() == 2
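Taken together, these suites specify the title-history contract: loading tolerates a missing or corrupted JSON file by returning `[]`, and duplicate detection is case-insensitive after stripping surrounding whitespace. A minimal sketch of those two behaviors (standalone helpers, not the actual `utils/title_history.py` code — the in-memory `history` parameter is an assumption made for illustration):

```python
import json
import os


def load_history_sketch(path: str) -> list:
    """Return saved entries, or [] when the file is missing or corrupted."""
    if not os.path.exists(path):
        return []
    try:
        with open(path, "r", encoding="utf-8") as f:
            return json.load(f)
    except (json.JSONDecodeError, OSError):
        return []


def is_title_used_sketch(title: str, history: list) -> bool:
    """Case-insensitive comparison after stripping surrounding whitespace."""
    needle = title.strip().lower()
    if not needle:
        return False
    return any(e.get("title", "").strip().lower() == needle for e in history)
```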

@@ -0,0 +1,137 @@
"""
Unit tests for TTS modules — GTTS and TTSEngine.
"""

import sys
from unittest.mock import MagicMock, patch

import pytest


# Pre-mock heavy dependencies that may not be installed in test env
@pytest.fixture(autouse=True)
def _mock_tts_deps(monkeypatch):
    """Mock heavy TTS dependencies."""
    # Mock gtts
    mock_gtts_module = MagicMock()
    mock_gtts_class = MagicMock()
    mock_gtts_module.gTTS = mock_gtts_class
    monkeypatch.setitem(sys.modules, "gtts", mock_gtts_module)

    # Mock numpy
    monkeypatch.setitem(sys.modules, "numpy", MagicMock())

    # Mock translators
    monkeypatch.setitem(sys.modules, "translators", MagicMock())

    # Mock moviepy and submodules
    mock_moviepy = MagicMock()
    monkeypatch.setitem(sys.modules, "moviepy", mock_moviepy)
    monkeypatch.setitem(sys.modules, "moviepy.audio", MagicMock())
    monkeypatch.setitem(sys.modules, "moviepy.audio.AudioClip", MagicMock())
    monkeypatch.setitem(sys.modules, "moviepy.audio.fx", MagicMock())

    # Clear cached imports to force reimport with mocks
    for mod_name in list(sys.modules.keys()):
        if mod_name.startswith("TTS."):
            del sys.modules[mod_name]


# ===================================================================
# GTTS
# ===================================================================


class TestGTTS:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        pass

    def test_init(self):
        from TTS.GTTS import GTTS

        engine = GTTS()
        assert engine.max_chars == 5000
        assert engine.voices == []

    def test_run_saves_file(self, tmp_path):
        from TTS.GTTS import GTTS

        engine = GTTS()
        filepath = str(tmp_path / "test.mp3")

        with patch("TTS.GTTS.gTTS") as MockGTTS:
            mock_tts_instance = MagicMock()
            MockGTTS.return_value = mock_tts_instance

            engine.run("Hello world", filepath)

            MockGTTS.assert_called_once_with(text="Hello world", lang="vi", slow=False)
            mock_tts_instance.save.assert_called_once_with(filepath)

    def test_run_uses_config_lang(self, mock_config):
        from TTS.GTTS import GTTS

        mock_config["threads"]["thread"]["post_lang"] = "en"
        engine = GTTS()

        with patch("TTS.GTTS.gTTS") as MockGTTS:
            MockGTTS.return_value = MagicMock()
            engine.run("test", "/tmp/test.mp3")
            MockGTTS.assert_called_once_with(text="test", lang="en", slow=False)

    def test_randomvoice_returns_from_list(self):
        from TTS.GTTS import GTTS

        engine = GTTS()
        engine.voices = ["voice1", "voice2", "voice3"]
        voice = engine.randomvoice()
        assert voice in engine.voices


# ===================================================================
# TTSEngine
# ===================================================================


class TestTTSEngine:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        pass

    def test_init_creates_paths(self, sample_thread_object):
        from TTS.engine_wrapper import TTSEngine

        mock_module = MagicMock
        engine = TTSEngine(
            tts_module=mock_module,
            reddit_object=sample_thread_object,
            path="assets/temp/",
            max_length=50,
        )
        assert engine.redditid == "test_thread_123"
        assert "test_thread_123/mp3" in engine.path

    def test_add_periods_removes_urls(self, sample_thread_object):
        from TTS.engine_wrapper import TTSEngine

        sample_thread_object["comments"] = [
            {
                "comment_body": "Check https://example.com and more\nAnother line",
                "comment_id": "c1",
                "comment_url": "",
                "comment_author": "@user",
            }
        ]

        mock_module = MagicMock
        engine = TTSEngine(
            tts_module=mock_module,
            reddit_object=sample_thread_object,
            path="assets/temp/",
        )
        engine.add_periods()
        body = sample_thread_object["comments"][0]["comment_body"]
        assert "https://" not in body
        # Newlines should be replaced with ". "
        assert "\n" not in body
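The `add_periods` assertions above amount to a small text-cleanup contract: URLs are stripped out and newlines become sentence breaks before text reaches the TTS engine. A hedged sketch of that cleanup (not the actual `TTSEngine.add_periods` code, just the behavior the test asserts):

```python
import re


def clean_for_tts_sketch(text: str) -> str:
    """Drop URLs and turn newlines into sentence breaks for TTS input."""
    text = re.sub(r"https?://\S+", "", text)
    return text.replace("\n", ". ")
```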

@@ -0,0 +1,257 @@
"""
Integration tests for upload pipeline — verifying the UploadManager
orchestrates multi-platform uploads correctly with mocked external APIs.
"""

from unittest.mock import MagicMock, patch

import pytest

from uploaders.base_uploader import VideoMetadata


# ===================================================================
# Full upload pipeline integration
# ===================================================================


class TestUploadPipelineIntegration:
    """Test the full upload_to_all flow with all platforms enabled."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config, sample_video_file):
        self.video_path = sample_video_file
        # Enable all uploaders
        mock_config["uploaders"]["youtube"]["enabled"] = True
        mock_config["uploaders"]["tiktok"]["enabled"] = True
        mock_config["uploaders"]["facebook"]["enabled"] = True

    def test_all_platforms_succeed(self, mock_config):
        from uploaders.upload_manager import UploadManager

        manager = UploadManager()

        # Replace all uploaders with mocks
        for platform in manager.uploaders:
            mock_up = MagicMock()
            mock_up.safe_upload.return_value = f"https://{platform}.com/video123"
            manager.uploaders[platform] = mock_up

        results = manager.upload_to_all(
            video_path=self.video_path,
            title="Integration Test Video",
            description="Testing upload pipeline",
            tags=["test"],
            hashtags=["integration"],
        )

        assert len(results) == 3
        assert all(url is not None for url in results.values())

    def test_partial_platform_failure(self, mock_config):
        from uploaders.upload_manager import UploadManager

        manager = UploadManager()

        for platform in manager.uploaders:
            mock_up = MagicMock()
            if platform == "tiktok":
                mock_up.safe_upload.return_value = None  # TikTok fails
            else:
                mock_up.safe_upload.return_value = f"https://{platform}.com/v"
            manager.uploaders[platform] = mock_up

        results = manager.upload_to_all(
            video_path=self.video_path,
            title="Partial Test",
        )

        assert results["tiktok"] is None
        # Other platforms should still succeed
        success_count = sum(1 for v in results.values() if v is not None)
        assert success_count >= 1

    def test_metadata_is_correct(self, mock_config):
        from uploaders.upload_manager import UploadManager

        manager = UploadManager()

        captured_metadata = {}
        for platform in manager.uploaders:
            mock_up = MagicMock()

            def capture(m, name=platform):
                captured_metadata[name] = m
                return f"https://{name}.com/v"

            mock_up.safe_upload.side_effect = capture
            manager.uploaders[platform] = mock_up

        manager.upload_to_all(
            video_path=self.video_path,
            title="Metadata Test",
            description="Test desc",
            tags=["tag1"],
            hashtags=["hash1"],
            privacy="private",
        )

        for name, m in captured_metadata.items():
            assert isinstance(m, VideoMetadata)
            assert m.title == "Metadata Test"
            assert m.description == "Test desc"
            assert m.privacy == "private"
            assert "hash1" in m.hashtags
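The orchestration these integration tests exercise reduces to a simple fan-out: call each platform's `safe_upload` and record the returned URL, with `None` marking a failed platform so one failure never blocks the others. A minimal sketch of that pattern (the `uploaders` dict of objects with a `safe_upload` method mirrors how the tests stub `manager.uploaders`; this is not the actual `UploadManager` code):

```python
def upload_to_all_sketch(uploaders: dict, metadata) -> dict:
    """Fan out to every platform; a None value marks that platform as failed."""
    return {name: up.safe_upload(metadata) for name, up in uploaders.items()}
```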


# ===================================================================
# YouTube upload integration
# ===================================================================


class TestYouTubeUploadIntegration:
    """Test YouTube upload flow with mocked requests."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config, sample_video_file):
        mock_config["uploaders"]["youtube"]["enabled"] = True
        self.video_path = sample_video_file

    def test_full_youtube_upload_flow(self):
        from uploaders.youtube_uploader import YouTubeUploader

        uploader = YouTubeUploader()

        with patch("uploaders.youtube_uploader.requests.post") as mock_post, \
             patch("uploaders.youtube_uploader.requests.put") as mock_put:

            # Auth response
            auth_resp = MagicMock()
            auth_resp.json.return_value = {"access_token": "yt_token"}
            auth_resp.raise_for_status = MagicMock()

            # Init upload response
            init_resp = MagicMock()
            init_resp.headers = {"Location": "https://upload.youtube.com/session123"}
            init_resp.raise_for_status = MagicMock()

            mock_post.side_effect = [auth_resp, init_resp]

            # Upload response
            upload_resp = MagicMock()
            upload_resp.json.return_value = {"id": "yt_video_id_123"}
            upload_resp.raise_for_status = MagicMock()
            mock_put.return_value = upload_resp

            uploader.authenticate()
            m = VideoMetadata(file_path=self.video_path, title="YT Test")
            url = uploader.upload(m)

            assert url == "https://www.youtube.com/watch?v=yt_video_id_123"


# ===================================================================
# TikTok upload integration
# ===================================================================


class TestTikTokUploadIntegration:
    """Test TikTok upload flow with mocked requests."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config, sample_video_file):
        mock_config["uploaders"]["tiktok"]["enabled"] = True
        self.video_path = sample_video_file

    def test_full_tiktok_upload_flow(self):
        from uploaders.tiktok_uploader import TikTokUploader

        uploader = TikTokUploader()

        with patch("uploaders.tiktok_uploader.requests.post") as mock_post, \
             patch("uploaders.tiktok_uploader.requests.put") as mock_put, \
             patch("uploaders.tiktok_uploader.time.sleep"):

            # Auth response
            auth_resp = MagicMock()
            auth_resp.json.return_value = {"data": {"access_token": "tt_token"}}
            auth_resp.raise_for_status = MagicMock()

            # Init upload response
            init_resp = MagicMock()
            init_resp.json.return_value = {
                "data": {"publish_id": "pub_123", "upload_url": "https://upload.tiktok.com/xyz"}
            }
            init_resp.raise_for_status = MagicMock()

            # Status check response
            status_resp = MagicMock()
            status_resp.json.return_value = {"data": {"status": "PUBLISH_COMPLETE"}}

            mock_post.side_effect = [auth_resp, init_resp, status_resp]
            mock_put.return_value = MagicMock(raise_for_status=MagicMock())

            uploader.authenticate()
            m = VideoMetadata(file_path=self.video_path, title="TT Test")
            url = uploader.upload(m)

            assert url is not None
            assert url.startswith("https://www.tiktok.com/")


# ===================================================================
# Facebook upload integration
# ===================================================================


class TestFacebookUploadIntegration:
    """Test Facebook upload flow with mocked requests."""

    @pytest.fixture(autouse=True)
    def _setup(self, mock_config, sample_video_file):
        mock_config["uploaders"]["facebook"]["enabled"] = True
        self.video_path = sample_video_file

    def test_full_facebook_upload_flow(self):
        from uploaders.facebook_uploader import FacebookUploader

        uploader = FacebookUploader()

        with patch("uploaders.facebook_uploader.requests.get") as mock_get, \
             patch("uploaders.facebook_uploader.requests.post") as mock_post:

            # Auth verify response
            auth_resp = MagicMock()
            auth_resp.json.return_value = {"id": "page_123", "name": "Test Page"}
            auth_resp.raise_for_status = MagicMock()
            mock_get.return_value = auth_resp

            # Init upload
            init_resp = MagicMock()
            init_resp.json.return_value = {
                "upload_session_id": "sess_123",
                "video_id": "vid_456",
            }
            init_resp.raise_for_status = MagicMock()

            # Transfer chunk
            transfer_resp = MagicMock()
            transfer_resp.json.return_value = {
                "start_offset": str(1024),  # File is 1KB, so this ends transfer
                "end_offset": str(1024),
            }
            transfer_resp.raise_for_status = MagicMock()

            # Finish
            finish_resp = MagicMock()
            finish_resp.json.return_value = {"success": True}
            finish_resp.raise_for_status = MagicMock()

            mock_post.side_effect = [init_resp, transfer_resp, finish_resp]

            uploader.authenticate()
            m = VideoMetadata(file_path=self.video_path, title="FB Test")
            url = uploader.upload(m)

            assert url is not None
            assert url.startswith("https://www.facebook.com/")
@ -0,0 +1,406 @@
|
||||
"""
|
||||
Unit tests for uploaders — BaseUploader, YouTubeUploader, TikTokUploader,
|
||||
FacebookUploader, and UploadManager.
|
||||
|
||||
All external API calls are mocked.
|
||||
"""
|
||||
|
||||
import os
|
||||
from unittest.mock import MagicMock, patch
|
||||
|
||||
import pytest
|
||||
import requests
|
||||
|
||||
from uploaders.base_uploader import BaseUploader, VideoMetadata
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# VideoMetadata
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestVideoMetadata:
|
||||
def test_default_values(self):
|
||||
m = VideoMetadata(file_path="/tmp/video.mp4", title="Test")
|
||||
assert m.file_path == "/tmp/video.mp4"
|
||||
assert m.title == "Test"
|
||||
assert m.description == ""
|
||||
assert m.tags == []
|
||||
assert m.hashtags == []
|
||||
assert m.thumbnail_path is None
|
||||
assert m.schedule_time is None
|
||||
assert m.privacy == "public"
|
||||
assert m.category == "Entertainment"
|
||||
assert m.language == "vi"
|
||||
|
||||
def test_custom_values(self):
|
||||
m = VideoMetadata(
|
||||
file_path="/tmp/video.mp4",
|
||||
title="Custom Video",
|
||||
description="Desc",
|
||||
tags=["tag1"],
|
||||
hashtags=["hash1"],
|
||||
privacy="private",
|
||||
)
|
||||
assert m.description == "Desc"
|
||||
assert m.tags == ["tag1"]
|
||||
assert m.privacy == "private"
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# BaseUploader.validate_video
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestBaseUploaderValidation:
|
||||
"""Test validate_video on a concrete subclass."""
|
||||
|
||||
def _make_uploader(self):
|
||||
class ConcreteUploader(BaseUploader):
|
||||
platform_name = "Test"
|
||||
|
||||
def authenticate(self):
|
||||
return True
|
||||
|
||||
def upload(self, metadata):
|
||||
return "https://example.com/video"
|
||||
|
||||
return ConcreteUploader()
|
||||
|
||||
def test_valid_video(self, sample_video_file):
|
||||
uploader = self._make_uploader()
|
||||
m = VideoMetadata(file_path=sample_video_file, title="Test Video")
|
||||
assert uploader.validate_video(m) is True
|
||||
|
||||
def test_missing_file(self):
|
||||
uploader = self._make_uploader()
|
||||
m = VideoMetadata(file_path="/nonexistent/file.mp4", title="Test")
|
||||
assert uploader.validate_video(m) is False
|
||||
|
||||
def test_empty_file(self, tmp_path):
|
||||
empty_file = tmp_path / "empty.mp4"
|
||||
empty_file.write_bytes(b"")
|
||||
uploader = self._make_uploader()
|
||||
m = VideoMetadata(file_path=str(empty_file), title="Test")
|
||||
assert uploader.validate_video(m) is False
|
||||
|
||||
def test_missing_title(self, sample_video_file):
|
||||
uploader = self._make_uploader()
|
||||
m = VideoMetadata(file_path=sample_video_file, title="")
|
||||
assert uploader.validate_video(m) is False


# ===================================================================
# BaseUploader.safe_upload
# ===================================================================


class TestSafeUpload:
    def _make_uploader(self, upload_return=None, auth_return=True):
        class ConcreteUploader(BaseUploader):
            platform_name = "Test"

            def authenticate(self):
                self._authenticated = auth_return
                return auth_return

            def upload(self, metadata):
                return upload_return

        return ConcreteUploader()

    def test_successful_upload(self, sample_video_file):
        uploader = self._make_uploader(upload_return="https://example.com/v1")
        m = VideoMetadata(file_path=sample_video_file, title="Test Video")
        result = uploader.safe_upload(m, max_retries=1)
        assert result == "https://example.com/v1"

    def test_failed_auth(self, sample_video_file):
        uploader = self._make_uploader(auth_return=False)
        m = VideoMetadata(file_path=sample_video_file, title="Test Video")
        result = uploader.safe_upload(m, max_retries=1)
        assert result is None

    def test_retries_on_exception(self, sample_video_file):
        class FlakeyUploader(BaseUploader):
            platform_name = "Test"
            _call_count = 0

            def authenticate(self):
                self._authenticated = True
                return True

            def upload(self, metadata):
                self._call_count += 1
                if self._call_count < 3:
                    raise Exception("Temporary failure")
                return "https://example.com/v1"

        uploader = FlakeyUploader()
        m = VideoMetadata(file_path=sample_video_file, title="Test Video")
        with patch("time.sleep"):
            result = uploader.safe_upload(m, max_retries=3)
        assert result == "https://example.com/v1"

    def test_fails_after_max_retries(self, sample_video_file):
        class AlwaysFailUploader(BaseUploader):
            platform_name = "Test"

            def authenticate(self):
                self._authenticated = True
                return True

            def upload(self, metadata):
                raise Exception("Always fails")

        uploader = AlwaysFailUploader()
        m = VideoMetadata(file_path=sample_video_file, title="Test Video")
        with patch("time.sleep"):
            result = uploader.safe_upload(m, max_retries=2)
        assert result is None
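The retry semantics these tests assert — retry on exception up to `max_retries` attempts, sleep between attempts, return `None` once retries are exhausted — can be sketched as a small wrapper. This is a behavioral sketch, not the actual `BaseUploader.safe_upload` (which also runs authentication and validation first):

```python
import time


def safe_upload_sketch(upload_fn, metadata, max_retries: int = 3, delay: float = 0.0):
    """Retry the upload on exceptions; return None once retries are exhausted."""
    for attempt in range(max_retries):
        try:
            return upload_fn(metadata)
        except Exception:
            if attempt < max_retries - 1:
                time.sleep(delay)  # back off before the next attempt
    return None
```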


# ===================================================================
# YouTubeUploader
# ===================================================================


class TestYouTubeUploader:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        mock_config["uploaders"]["youtube"]["enabled"] = True

    def test_authenticate_success(self):
        from uploaders.youtube_uploader import YouTubeUploader

        uploader = YouTubeUploader()
        with patch("uploaders.youtube_uploader.requests.post") as mock_post:
            mock_post.return_value = MagicMock(
                status_code=200,
                json=lambda: {"access_token": "yt_token_123"},
                raise_for_status=lambda: None,
            )
            assert uploader.authenticate() is True
            assert uploader.access_token == "yt_token_123"

    def test_authenticate_missing_creds(self, mock_config):
        mock_config["uploaders"]["youtube"]["client_id"] = ""
        from uploaders.youtube_uploader import YouTubeUploader

        uploader = YouTubeUploader()
        assert uploader.authenticate() is False

    def test_authenticate_api_error(self):
        from uploaders.youtube_uploader import YouTubeUploader

        uploader = YouTubeUploader()
        with patch("uploaders.youtube_uploader.requests.post") as mock_post:
            mock_post.side_effect = Exception("Auth failed")
            assert uploader.authenticate() is False

    def test_upload_returns_none_without_token(self, sample_video_file):
        from uploaders.youtube_uploader import YouTubeUploader

        uploader = YouTubeUploader()
        m = VideoMetadata(file_path=sample_video_file, title="Test")
        assert uploader.upload(m) is None

    def test_category_id_mapping(self):
        from uploaders.youtube_uploader import YouTubeUploader

        assert YouTubeUploader._get_category_id("Entertainment") == "24"
        assert YouTubeUploader._get_category_id("Gaming") == "20"
        assert YouTubeUploader._get_category_id("Unknown") == "24"

    def test_build_description(self):
        from uploaders.youtube_uploader import YouTubeUploader

        uploader = YouTubeUploader()
        m = VideoMetadata(
            file_path="/tmp/v.mp4",
            title="Test",
            description="Video description",
            hashtags=["trending", "viral"],
        )
        desc = uploader._build_description(m)
        assert "Video description" in desc
        assert "Threads Video Maker Bot" in desc
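The `_get_category_id` assertions above imply a plain dictionary lookup with Entertainment ("24") as the fallback for unknown names. A sketch of that mapping, restricted to the two categories the tests actually pin down (the real mapping presumably covers more):

```python
YOUTUBE_CATEGORY_IDS = {
    "Entertainment": "24",
    "Gaming": "20",
}


def get_category_id_sketch(category: str) -> str:
    """Unknown category names fall back to Entertainment ("24")."""
    return YOUTUBE_CATEGORY_IDS.get(category, "24")
```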


# ===================================================================
# TikTokUploader
# ===================================================================


class TestTikTokUploader:
    @pytest.fixture(autouse=True)
    def _setup(self, mock_config):
        mock_config["uploaders"]["tiktok"]["enabled"] = True

    def test_authenticate_success(self):
        from uploaders.tiktok_uploader import TikTokUploader

        uploader = TikTokUploader()
        with patch("uploaders.tiktok_uploader.requests.post") as mock_post:
            mock_post.return_value = MagicMock(
                status_code=200,
                json=lambda: {"data": {"access_token": "tt_token_123"}},
                raise_for_status=lambda: None,
            )
            assert uploader.authenticate() is True
            assert uploader.access_token == "tt_token_123"

    def test_authenticate_no_token_in_response(self):
        from uploaders.tiktok_uploader import TikTokUploader

        uploader = TikTokUploader()
        with patch("uploaders.tiktok_uploader.requests.post") as mock_post:
            mock_post.return_value = MagicMock(
                status_code=200,
                json=lambda: {"data": {}},
                raise_for_status=lambda: None,
            )
            assert uploader.authenticate() is False

    def test_privacy_mapping(self):
        from uploaders.tiktok_uploader import TikTokUploader

        assert TikTokUploader._map_privacy("public") == "PUBLIC_TO_EVERYONE"
        assert TikTokUploader._map_privacy("private") == "SELF_ONLY"
        assert TikTokUploader._map_privacy("friends") == "MUTUAL_FOLLOW_FRIENDS"
        assert TikTokUploader._map_privacy("unknown") == "PUBLIC_TO_EVERYONE"

    def test_build_caption(self):
        from uploaders.tiktok_uploader import TikTokUploader

        uploader = TikTokUploader()
        m = VideoMetadata(
            file_path="/tmp/v.mp4",
            title="Test Video Title",
            hashtags=["viral", "trending"],
        )
        caption = uploader._build_caption(m)
        assert "Test Video Title" in caption
        assert "#viral" in caption
        assert "#trending" in caption
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# FacebookUploader
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestFacebookUploader:
|
||||
@pytest.fixture(autouse=True)
|
||||
def _setup(self, mock_config):
|
||||
mock_config["uploaders"]["facebook"]["enabled"] = True
|
||||
|
||||
def test_authenticate_success(self):
|
||||
from uploaders.facebook_uploader import FacebookUploader
|
||||
|
||||
uploader = FacebookUploader()
|
||||
with patch("uploaders.facebook_uploader.requests.get") as mock_get:
|
||||
mock_get.return_value = MagicMock(
|
||||
status_code=200,
|
||||
json=lambda: {"id": "page_123", "name": "Test Page"},
|
||||
raise_for_status=lambda: None,
|
||||
)
|
||||
assert uploader.authenticate() is True
|
||||
|
||||
def test_authenticate_missing_token(self, mock_config):
|
||||
mock_config["uploaders"]["facebook"]["access_token"] = ""
|
||||
from uploaders.facebook_uploader import FacebookUploader
|
||||
|
||||
uploader = FacebookUploader()
|
||||
assert uploader.authenticate() is False
|
||||
|
||||
def test_authenticate_missing_page_id(self, mock_config):
|
||||
mock_config["uploaders"]["facebook"]["page_id"] = ""
|
||||
from uploaders.facebook_uploader import FacebookUploader
|
||||
|
||||
uploader = FacebookUploader()
|
||||
assert uploader.authenticate() is False
|
||||
|
||||
def test_build_description(self):
|
||||
from uploaders.facebook_uploader import FacebookUploader
|
||||
|
||||
uploader = FacebookUploader()
|
||||
m = VideoMetadata(
|
||||
file_path="/tmp/v.mp4",
|
||||
title="Test",
|
||||
description="Some description",
|
||||
hashtags=["viral"],
|
||||
)
|
||||
desc = uploader._build_description(m)
|
||||
assert "Some description" in desc
|
||||
assert "#viral" in desc
|
||||
assert "Threads Video Maker Bot" in desc
|
||||
|
||||
|
||||
# ===================================================================
|
||||
# UploadManager
|
||||
# ===================================================================
|
||||
|
||||
|
||||
class TestUploadManager:
|
||||
def test_no_uploaders_when_disabled(self, mock_config):
|
||||
from uploaders.upload_manager import UploadManager
|
||||
|
||||
manager = UploadManager()
|
||||
assert len(manager.uploaders) == 0
|
||||
|
||||
def test_upload_to_all_empty(self, mock_config, sample_video_file):
|
||||
from uploaders.upload_manager import UploadManager
|
||||
|
||||
manager = UploadManager()
|
||||
results = manager.upload_to_all(
|
||||
video_path=sample_video_file,
|
||||
title="Test",
|
||||
)
|
||||
assert results == {}
|
||||
|
||||
def test_upload_to_platform_not_enabled(self, mock_config, sample_video_file):
|
||||
from uploaders.upload_manager import UploadManager
|
||||
|
||||
manager = UploadManager()
|
||||
m = VideoMetadata(file_path=sample_video_file, title="Test")
|
||||
result = manager.upload_to_platform("youtube", m)
|
||||
assert result is None
|
||||
|
||||
def test_default_hashtags(self):
|
||||
from uploaders.upload_manager import UploadManager
|
||||
|
||||
hashtags = UploadManager._default_hashtags()
|
||||
assert "threads" in hashtags
|
||||
assert "viral" in hashtags
|
||||
assert "vietnam" in hashtags
|
||||
|
||||
def test_init_with_enabled_uploaders(self, mock_config):
|
||||
mock_config["uploaders"]["youtube"]["enabled"] = True
|
||||
mock_config["uploaders"]["tiktok"]["enabled"] = True
|
||||
|
||||
from uploaders.upload_manager import UploadManager
|
||||
|
||||
manager = UploadManager()
|
||||
assert "youtube" in manager.uploaders
|
||||
assert "tiktok" in manager.uploaders
|
||||
assert "facebook" not in manager.uploaders
|
||||
|
||||
def test_upload_to_all_with_mocked_uploaders(self, mock_config, sample_video_file):
|
||||
mock_config["uploaders"]["youtube"]["enabled"] = True
|
||||
|
||||
from uploaders.upload_manager import UploadManager
|
||||
|
||||
manager = UploadManager()
|
||||
|
||||
# Mock the youtube uploader's safe_upload
|
||||
mock_uploader = MagicMock()
|
||||
mock_uploader.safe_upload.return_value = "https://youtube.com/watch?v=test"
|
||||
manager.uploaders["youtube"] = mock_uploader
|
||||
|
||||
results = manager.upload_to_all(
|
||||
video_path=sample_video_file,
|
||||
title="Test Video",
|
||||
description="Test Description",
|
||||
)
|
||||
assert results["youtube"] == "https://youtube.com/watch?v=test"
|
||||
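The stub pattern these tests rely on can be seen in isolation: `MagicMock` turns constructor keywords into plain attributes, so a `lambda`-backed `json` behaves like the method on a real `requests.Response`. A minimal, dependency-free sketch (the token value is purely illustrative):

```python
from unittest.mock import MagicMock

# Constructor keywords become attributes on the mock, so json and
# raise_for_status act like the methods on a real requests.Response.
fake_response = MagicMock(
    status_code=200,
    json=lambda: {"data": {"access_token": "tt_token_123"}},
    raise_for_status=lambda: None,
)

fake_response.raise_for_status()  # no-op, does not raise
token = fake_response.json().get("data", {}).get("access_token")
print(token)
```

Combined with `patch(...)` as in the tests above, this is enough to drive any code path that only reads the response's status, body, and error state.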
@@ -0,0 +1,71 @@
"""
Unit tests for utils/videos.py — Video deduplication and metadata storage.
"""

import json
import os
from unittest.mock import mock_open, patch

import pytest


class TestCheckDone:
    def test_returns_id_when_not_done(self, mock_config, tmp_path):
        from utils.videos import check_done

        videos_data = json.dumps([])
        with patch("builtins.open", mock_open(read_data=videos_data)):
            result = check_done("new_thread_id")
        assert result == "new_thread_id"

    def test_returns_none_when_already_done(self, mock_config, tmp_path):
        from utils.videos import check_done

        videos_data = json.dumps([{"id": "existing_id", "subreddit": "test"}])
        with patch("builtins.open", mock_open(read_data=videos_data)):
            result = check_done("existing_id")
        assert result is None

    def test_returns_obj_when_post_id_specified(self, mock_config):
        from utils.videos import check_done

        mock_config["threads"]["thread"]["post_id"] = "specific_post"
        videos_data = json.dumps([{"id": "existing_id", "subreddit": "test"}])
        with patch("builtins.open", mock_open(read_data=videos_data)):
            result = check_done("existing_id")
        assert result == "existing_id"


class TestSaveData:
    def test_saves_video_metadata(self, mock_config, tmp_path):
        from utils.videos import save_data

        videos_file = str(tmp_path / "videos.json")
        with open(videos_file, "w", encoding="utf-8") as f:
            json.dump([], f)

        m = mock_open(read_data=json.dumps([]))
        m.return_value.seek = lambda pos: None

        with patch("builtins.open", m):
            save_data("test_channel", "output.mp4", "Test Title", "thread_123", "minecraft")

        # Verify write was called with the new data
        write_calls = m().write.call_args_list
        assert len(write_calls) > 0
        written_data = "".join(call.args[0] for call in write_calls)
        parsed = json.loads(written_data)
        assert len(parsed) == 1
        assert parsed[0]["id"] == "thread_123"

    def test_skips_duplicate_id(self, mock_config):
        from utils.videos import save_data

        existing = [{"id": "thread_123", "subreddit": "test", "time": "1000",
                     "background_credit": "", "reddit_title": "", "filename": ""}]
        m = mock_open(read_data=json.dumps(existing))
        with patch("builtins.open", m):
            save_data("test_channel", "output2.mp4", "Another Title", "thread_123", "gta")

        # Verify no new data was written (duplicate ID skipped)
        assert not m().write.called
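The write-capture idiom in `test_saves_video_metadata` deserves a note: `json.dump` emits its payload across many small `write()` calls, so the mock handle's `call_args_list` has to be joined back together before the result can be parsed. A self-contained sketch of that idiom, using only the standard library:

```python
import json
from unittest.mock import mock_open, patch

# The mock serves an empty JSON list on read and records every write.
m = mock_open(read_data=json.dumps([]))

with patch("builtins.open", m):
    with open("videos.json", "r+", encoding="utf-8") as f:
        data = json.load(f)               # reads "[]" from read_data
        data.append({"id": "thread_123"})
        json.dump(data, f)                # writes the payload in chunks

# json.dump writes in many small pieces, so the full payload must be
# reassembled from every recorded write() call before parsing it back.
written = "".join(call.args[0] for call in m().write.call_args_list)
parsed = json.loads(written)
print(parsed[0]["id"])
```

Asserting on a single `write()` call would fail here even when the code under test is correct, which is why the tests join the calls first.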
@@ -0,0 +1,147 @@
"""
Unit tests for utils/voice.py — Text sanitization and rate-limit handling.
"""

from datetime import datetime
from unittest.mock import MagicMock, patch

import pytest


# ===================================================================
# sanitize_text
# ===================================================================


class TestSanitizeText:
    """Tests for sanitize_text — text cleaning for TTS input."""

    @pytest.fixture(autouse=True)
    def _setup_config(self, mock_config):
        """Ensure settings.config is available."""
        pass

    def test_removes_urls(self):
        from utils.voice import sanitize_text

        text = "Check out https://example.com and http://test.org for more info"
        result = sanitize_text(text)
        assert "https://" not in result
        assert "http://" not in result
        assert "example.com" not in result

    def test_removes_special_characters(self):
        from utils.voice import sanitize_text

        text = "Hello @user! This is #awesome & great"
        result = sanitize_text(text)
        assert "@" not in result
        assert "#" not in result

    def test_replaces_plus_and_ampersand(self):
        from utils.voice import sanitize_text

        text = "1+1 equals 2"
        result = sanitize_text(text)
        # Verify numeric content is preserved after sanitization
        assert "1" in result
        assert "equals" in result

    def test_removes_extra_whitespace(self):
        from utils.voice import sanitize_text

        text = "Hello   world    test"
        result = sanitize_text(text)
        assert "  " not in result

    def test_preserves_normal_text(self):
        from utils.voice import sanitize_text

        text = "This is a normal sentence without special characters"
        result = sanitize_text(text)
        # clean() with no_emojis=True may lowercase the text.
        # The important thing is that word content is preserved.
        assert "normal" in result.lower()
        assert "sentence" in result.lower()
        assert "special" in result.lower()

    def test_handles_empty_string(self):
        from utils.voice import sanitize_text

        result = sanitize_text("")
        assert result == ""

    def test_handles_unicode_text(self):
        from utils.voice import sanitize_text

        text = "Xin chao the gioi"
        result = sanitize_text(text)
        # clean() may transliterate unicode characters
        assert "chao" in result.lower() or "xin" in result.lower()


# ===================================================================
# check_ratelimit
# ===================================================================


class TestCheckRateLimit:
    def test_returns_true_for_normal_response(self):
        from utils.voice import check_ratelimit

        mock_response = MagicMock()
        mock_response.status_code = 200
        assert check_ratelimit(mock_response) is True

    def test_returns_false_for_429(self):
        from utils.voice import check_ratelimit

        mock_response = MagicMock()
        mock_response.status_code = 429
        mock_response.headers = {}  # No rate-limit header → falls to KeyError
        assert check_ratelimit(mock_response) is False

    def test_handles_429_with_header(self):
        import time as pytime

        from utils.voice import check_ratelimit

        mock_response = MagicMock()
        mock_response.status_code = 429
        # Set the reset time to one second from now so any sleep is tiny
        mock_response.headers = {"X-RateLimit-Reset": str(int(pytime.time()) + 1)}
        with patch("utils.voice.sleep") as mock_sleep:
            result = check_ratelimit(mock_response)
            assert result is False

    def test_returns_true_for_non_429_error(self):
        from utils.voice import check_ratelimit

        mock_response = MagicMock()
        mock_response.status_code = 500
        assert check_ratelimit(mock_response) is True


# ===================================================================
# sleep_until
# ===================================================================


class TestSleepUntil:
    def test_raises_for_non_numeric(self):
        from utils.voice import sleep_until

        with pytest.raises(Exception, match="not a number"):
            sleep_until("not a timestamp")

    def test_returns_immediately_for_past_time(self):
        from utils.voice import sleep_until

        # A past timestamp should return immediately without a long sleep
        sleep_until(0)  # epoch 0 is in the past

    def test_accepts_datetime(self):
        from utils.voice import sleep_until

        past_dt = datetime(2000, 1, 1)
        sleep_until(past_dt)  # Should return immediately
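As a side note on what the sanitization tests above exercise: URL removal is typically a single regex substitution. The helper below is a hypothetical stand-in for that one step, not the actual `sanitize_text` implementation:

```python
import re


def strip_urls(text: str) -> str:
    """Hypothetical sketch of the URL-removal step: drop any http(s)
    token and whatever non-space characters follow it."""
    return re.sub(r"https?://\S+", "", text).strip()


result = strip_urls("Check out https://example.com and http://test.org for info")
print(result)
```

A real sanitizer would also collapse the double spaces left behind and handle punctuation, which is exactly what the `test_removes_extra_whitespace` case guards against.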
@@ -0,0 +1,330 @@
"""
Google Trends Integration - fetch trending keywords from Google Trends.

Uses the public Google Trends RSS feed to fetch the keywords currently
trending in Vietnam, then uses those keywords to find posts on Threads.

Flow:
1. Fetch trending keywords from the Google Trends RSS feed (geo=VN)
2. Use Playwright to search Threads for posts matching each keyword
3. Return the list of matching posts
"""

import xml.etree.ElementTree as ET
from typing import Dict, List, Optional
from urllib.parse import quote_plus

import requests
from playwright.sync_api import (
    TimeoutError as PlaywrightTimeoutError,
    sync_playwright,
)

from utils.console import print_step, print_substep

# Google Trends daily trending RSS endpoint
_GOOGLE_TRENDS_RSS_URL = "https://trends.google.com/trends/trendingsearches/daily/rss"
_RSS_REQUEST_TIMEOUT = 15

# Playwright settings (reuse from trending.py)
_PAGE_LOAD_TIMEOUT_MS = 30_000
_CONTENT_WAIT_MS = 3_000
_SCROLL_ITERATIONS = 3

_BROWSER_VIEWPORT = {"width": 1280, "height": 900}
_BROWSER_USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/131.0.0.0 Safari/537.36"
)
_BROWSER_LOCALE = "vi-VN"

# Threads search URL template
_THREADS_SEARCH_URL = "https://www.threads.net/search?q={query}&serp_type=default"


class GoogleTrendsError(Exception):
    """Raised when data cannot be fetched from Google Trends."""


def get_google_trending_keywords(
    geo: str = "VN",
    limit: int = 10,
) -> List[Dict[str, str]]:
    """Fetch the list of trending keywords from the Google Trends RSS feed.

    Args:
        geo: Country code (default: VN for Vietnam).
        limit: Maximum number of keywords to fetch.

    Returns:
        A list of dicts of the form ``{title, traffic, news_url}``.

    Raises:
        GoogleTrendsError: If data cannot be fetched from Google Trends.
    """
    print_substep(
        f"🔍 Fetching trending keywords from Google Trends (geo={geo})...",
        style="bold blue",
    )

    url = f"{_GOOGLE_TRENDS_RSS_URL}?geo={geo}"
    try:
        response = requests.get(url, timeout=_RSS_REQUEST_TIMEOUT)
        response.raise_for_status()
    except requests.RequestException as exc:
        raise GoogleTrendsError(
            f"Could not connect to the Google Trends RSS feed: {exc}"
        ) from exc

    try:
        root = ET.fromstring(response.content)
    except ET.ParseError as exc:
        raise GoogleTrendsError(
            f"Could not parse the Google Trends RSS XML: {exc}"
        ) from exc

    # RSS structure: <rss><channel><item>...</item></channel></rss>
    # Google Trends uses the ht: namespace for traffic data
    namespaces = {"ht": "https://trends.google.com/trends/trendingsearches/daily"}

    keywords: List[Dict[str, str]] = []
    for item in root.iter("item"):
        if len(keywords) >= limit:
            break

        title_elem = item.find("title")
        title = title_elem.text.strip() if title_elem is not None and title_elem.text else ""
        if not title:
            continue

        # Approximate traffic (e.g., "200,000+")
        traffic_elem = item.find("ht:approx_traffic", namespaces)
        traffic = traffic_elem.text.strip() if traffic_elem is not None and traffic_elem.text else ""

        # News item URL (optional)
        news_url = ""
        news_item = item.find("ht:news_item", namespaces)
        if news_item is not None:
            news_url_elem = news_item.find("ht:news_item_url", namespaces)
            news_url = (
                news_url_elem.text.strip()
                if news_url_elem is not None and news_url_elem.text
                else ""
            )

        keywords.append({
            "title": title,
            "traffic": traffic,
            "news_url": news_url,
        })

    if not keywords:
        raise GoogleTrendsError(
            f"No trending keywords found on Google Trends (geo={geo})."
        )

    kw_preview = ", ".join(k["title"][:30] for k in keywords[:5])
    suffix = "..." if len(keywords) > 5 else ""
    print_substep(
        f"✅ Found {len(keywords)} trending keywords: {kw_preview}{suffix}",
        style="bold green",
    )
    return keywords


def search_threads_by_query(
    query: str,
    max_threads: int = 10,
) -> List[Dict[str, str]]:
    """Search Threads for posts matching a keyword, using Playwright.

    Opens the Threads search page and extracts posts from the results.

    Args:
        query: Search keyword.
        max_threads: Maximum number of posts to collect.

    Returns:
        A list of thread dicts: ``{text, username, permalink, shortcode, keyword}``.
    """
    import re

    search_url = _THREADS_SEARCH_URL.format(query=quote_plus(query))
    threads: List[Dict[str, str]] = []

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        context = browser.new_context(
            viewport=_BROWSER_VIEWPORT,
            user_agent=_BROWSER_USER_AGENT,
            locale=_BROWSER_LOCALE,
        )
        page = context.new_page()

        try:
            page.goto(search_url, timeout=_PAGE_LOAD_TIMEOUT_MS)
            page.wait_for_load_state("domcontentloaded", timeout=_PAGE_LOAD_TIMEOUT_MS)
            page.wait_for_timeout(_CONTENT_WAIT_MS)

            # Scroll to load more content
            for _ in range(_SCROLL_ITERATIONS):
                page.evaluate("window.scrollBy(0, window.innerHeight)")
                page.wait_for_timeout(1000)

            # Extract posts from search results
            seen_shortcodes: set = set()
            post_links = page.query_selector_all('a[href*="/post/"]')

            for link in post_links:
                if len(threads) >= max_threads:
                    break
                try:
                    href = link.get_attribute("href") or ""
                    sc_match = re.search(r"/post/([A-Za-z0-9_-]+)", href)
                    if not sc_match:
                        continue
                    shortcode = sc_match.group(1)
                    if shortcode in seen_shortcodes:
                        continue
                    seen_shortcodes.add(shortcode)

                    # Username from URL: /@username/post/...
                    user_match = re.search(r"/@([^/]+)/post/", href)
                    username = user_match.group(1) if user_match else "unknown"

                    # Get post text from parent container
                    text = _get_post_text_from_link(link)
                    if not text or len(text) < 10:
                        continue

                    permalink = (
                        f"https://www.threads.net{href}"
                        if href.startswith("/")
                        else href
                    )
                    threads.append({
                        "text": text,
                        "username": username,
                        "permalink": permalink,
                        "shortcode": shortcode,
                        "keyword": query,
                    })
                except Exception:
                    continue

        except PlaywrightTimeoutError:
            print_substep(
                f"⚠️ Timeout while searching Threads for keyword: {query}",
                style="bold yellow",
            )
        except Exception as exc:
            print_substep(
                f"⚠️ Error searching Threads for '{query}': {exc}",
                style="bold yellow",
            )
        finally:
            browser.close()

    return threads


def _get_post_text_from_link(link_handle) -> str:
    """Walk up the DOM from a link element to extract post text content."""
    try:
        container = link_handle.evaluate_handle(
            """el => {
                let node = el;
                for (let i = 0; i < 10; i++) {
                    node = node.parentElement;
                    if (!node) return el.parentElement || el;
                    const text = node.innerText || '';
                    if (text.length > 30 && (
                        node.getAttribute('role') === 'article' ||
                        node.tagName === 'ARTICLE' ||
                        node.dataset && node.dataset.testid
                    )) {
                        return node;
                    }
                }
                return el.parentElement
                    ? el.parentElement.parentElement || el.parentElement
                    : el;
            }"""
        )
        raw = container.inner_text().strip() if container else ""
    except Exception:
        return ""

    if not raw:
        return ""

    # Clean: remove short metadata lines (timestamps, UI buttons, etc.).
    # These labels match the Vietnamese Threads UI (locale vi-VN).
    _skip = {"Trả lời", "Thích", "Chia sẻ", "Repost", "Quote", "...", "•"}
    cleaned_lines: list = []
    for line in raw.split("\n"):
        line = line.strip()
        if not line or len(line) < 3:
            continue
        if line in _skip:
            continue
        # Skip standalone @username lines
        if line.startswith("@") and " " not in line and len(line) < 30:
            continue
        cleaned_lines.append(line)
    return "\n".join(cleaned_lines)


def get_threads_from_google_trends(
    geo: str = "VN",
    max_keywords: int = 5,
    max_threads_per_keyword: int = 10,
) -> List[Dict[str, str]]:
    """Fetch Threads posts based on trending keywords from Google Trends.

    Combines Google Trends with Threads search:
    1. Fetch trending keywords from Google Trends
    2. Search Threads for posts matching each keyword

    Args:
        geo: Country code for Google Trends.
        max_keywords: Maximum number of keywords to walk through.
        max_threads_per_keyword: Maximum number of posts per keyword.

    Returns:
        A list of thread dicts.

    Raises:
        GoogleTrendsError: If no keywords could be fetched from Google Trends.
    """
    print_step("🌐 Fetching Threads posts based on Google Trends...")

    keywords = get_google_trending_keywords(geo=geo, limit=max_keywords)
    all_threads: List[Dict[str, str]] = []

    for kw in keywords:
        keyword_title = kw["title"]
        print_substep(
            f" 🔎 Searching Threads for: '{keyword_title}'...",
            style="bold blue",
        )
        found = search_threads_by_query(
            query=keyword_title,
            max_threads=max_threads_per_keyword,
        )
        all_threads.extend(found)
        print_substep(
            f" 📝 '{keyword_title}': {len(found)} posts",
            style="bold blue",
        )

        # Stop early if we have enough threads
        if len(all_threads) >= max_threads_per_keyword * 2:
            break

    print_substep(
        f"✅ {len(all_threads)} posts in total from Google Trends keywords",
        style="bold green",
    )
    return all_threads
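The namespace-aware parsing in `get_google_trending_keywords` can be checked offline. The sketch below uses a hand-written sample feed (not real Google Trends output) with the same `ht:` namespace URI the module registers, and reads the same fields:

```python
import xml.etree.ElementTree as ET

# Hand-written sample in the shape the module expects; the real feed
# is served from https://trends.google.com/trends/trendingsearches/daily/rss
SAMPLE_RSS = """<?xml version="1.0"?>
<rss xmlns:ht="https://trends.google.com/trends/trendingsearches/daily">
  <channel>
    <item>
      <title>sea games</title>
      <ht:approx_traffic>200,000+</ht:approx_traffic>
      <ht:news_item>
        <ht:news_item_url>https://example.com/article</ht:news_item_url>
      </ht:news_item>
    </item>
  </channel>
</rss>"""

# The prefix in the mapping is arbitrary; only the URI has to match
# the xmlns:ht declaration in the feed.
namespaces = {"ht": "https://trends.google.com/trends/trendingsearches/daily"}
root = ET.fromstring(SAMPLE_RSS)

for item in root.iter("item"):
    title = item.findtext("title", default="").strip()
    traffic = item.findtext("ht:approx_traffic", default="", namespaces=namespaces).strip()
    print(title, traffic)
```

Without the `namespaces` mapping, `find("ht:approx_traffic")` would raise or miss the element, which is why the module builds the dict explicitly rather than relying on the prefix alone.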
@@ -0,0 +1,372 @@
"""
Threads Trending Scraper - fetch posts from the "Trending now" section on Threads.

The official Threads API does not provide an endpoint for trending topics.
This module uses Playwright to scrape trending content from the Threads web UI.

Flow:
1. Open the Threads search page (https://www.threads.net/search)
2. Extract the trending topic links
3. Visit each topic to collect its posts
4. Visit individual posts to collect replies (if needed)
"""

import re
from typing import Dict, List

from playwright.sync_api import (
    Page,
    TimeoutError as PlaywrightTimeoutError,
    sync_playwright,
)

from utils.console import print_step, print_substep

THREADS_SEARCH_URL = "https://www.threads.net/search"
_PAGE_LOAD_TIMEOUT_MS = 30_000
_CONTENT_WAIT_MS = 3_000
_REPLY_SCROLL_ITERATIONS = 5
_TOPIC_SCROLL_ITERATIONS = 2

# Shared browser context settings
_BROWSER_VIEWPORT = {"width": 1280, "height": 900}
_BROWSER_USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/131.0.0.0 Safari/537.36"
)
_BROWSER_LOCALE = "vi-VN"


class TrendingScrapeError(Exception):
    """Raised when trending content cannot be scraped from Threads."""


# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------


def _extract_topic_links(page: Page, limit: int) -> List[Dict[str, str]]:
    """Extract trending topic links from the search page DOM."""
    topics: List[Dict[str, str]] = []
    elements = page.query_selector_all('a[href*="/search?q="]')
    for elem in elements:
        if len(topics) >= limit:
            break
        try:
            href = elem.get_attribute("href") or ""
            text = elem.inner_text().strip()
            if not text or not href:
                continue
            lines = [line.strip() for line in text.split("\n") if line.strip()]
            title = lines[0] if lines else ""
            if not title:
                continue
            url = f"https://www.threads.net{href}" if href.startswith("/") else href
            topics.append({"title": title, "url": url})
        except Exception:
            continue
    return topics


def _extract_post_links(page: Page, limit: int) -> List[Dict[str, str]]:
    """Extract thread post data from a page containing post links."""
    threads: List[Dict[str, str]] = []
    seen_shortcodes: set = set()

    post_links = page.query_selector_all('a[href*="/post/"]')
    for link in post_links:
        if len(threads) >= limit:
            break
        try:
            href = link.get_attribute("href") or ""
            sc_match = re.search(r"/post/([A-Za-z0-9_-]+)", href)
            if not sc_match:
                continue
            shortcode = sc_match.group(1)
            if shortcode in seen_shortcodes:
                continue
            seen_shortcodes.add(shortcode)

            # Username from URL: /@username/post/...
            user_match = re.search(r"/@([^/]+)/post/", href)
            username = user_match.group(1) if user_match else "unknown"

            # Walk up the DOM to find a container with the post text
            text = _get_post_text(link)
            if not text or len(text) < 10:
                continue

            permalink = (
                f"https://www.threads.net{href}" if href.startswith("/") else href
            )
            threads.append(
                {
                    "text": text,
                    "username": username,
                    "permalink": permalink,
                    "shortcode": shortcode,
                }
            )
        except Exception:
            continue
    return threads


def _get_post_text(link_handle) -> str:
    """Walk up the DOM from a link element to extract post text content."""
    try:
        container = link_handle.evaluate_handle(
            """el => {
                let node = el;
                for (let i = 0; i < 10; i++) {
                    node = node.parentElement;
                    if (!node) return el.parentElement || el;
                    const text = node.innerText || '';
                    if (text.length > 30 && (
                        node.getAttribute('role') === 'article' ||
                        node.tagName === 'ARTICLE' ||
                        node.dataset && node.dataset.testid
                    )) {
                        return node;
                    }
                }
                return el.parentElement ? el.parentElement.parentElement || el.parentElement : el;
            }"""
        )
        raw = container.inner_text().strip() if container else ""
    except Exception:
        return ""

    if not raw:
        return ""

    # Clean: remove short metadata lines (timestamps, UI buttons, etc.).
    # These labels match the Vietnamese Threads UI (locale vi-VN).
    _skip = {"Trả lời", "Thích", "Chia sẻ", "Repost", "Quote", "...", "•"}
    cleaned_lines: list = []
    for line in raw.split("\n"):
        line = line.strip()
        if not line or len(line) < 3:
            continue
        if line in _skip:
            continue
        # Skip standalone @username lines
        if line.startswith("@") and " " not in line and len(line) < 30:
            continue
        cleaned_lines.append(line)
    return "\n".join(cleaned_lines)


def _extract_replies(page: Page, limit: int) -> List[Dict[str, str]]:
    """Extract replies from a thread detail page."""
    replies: List[Dict[str, str]] = []

    # Scroll to load more replies
    for _ in range(_REPLY_SCROLL_ITERATIONS):
        page.evaluate("window.scrollBy(0, window.innerHeight)")
        page.wait_for_timeout(1000)

    articles = page.query_selector_all('div[role="article"], article')
    for idx, article in enumerate(articles):
        if idx == 0:
            continue  # Skip the main post
        if len(replies) >= limit:
            break
        try:
            text = article.inner_text().strip()
            if not text or len(text) < 5:
                continue

            # Username
            username_link = article.query_selector('a[href^="/@"]')
            username = "unknown"
            if username_link:
                href = username_link.get_attribute("href") or ""
                match = re.match(r"/@([^/]+)", href)
                username = match.group(1) if match else "unknown"

            # Clean text (the skip set matches the Vietnamese Threads UI)
            _skip = {"Trả lời", "Thích", "Chia sẻ", "Repost", "...", "•"}
            lines = [
                l.strip()
                for l in text.split("\n")
                if l.strip() and len(l.strip()) > 3 and l.strip() not in _skip
            ]
            clean_text = "\n".join(lines)
            if clean_text:
                replies.append({"text": clean_text, "username": username})
        except Exception:
            continue
    return replies


def _create_browser_context(playwright):
    """Create a Playwright browser and context with shared settings."""
    browser = playwright.chromium.launch(headless=True)
    context = browser.new_context(
        viewport=_BROWSER_VIEWPORT,
        user_agent=_BROWSER_USER_AGENT,
        locale=_BROWSER_LOCALE,
    )
    return browser, context


def _scroll_page(page: Page, times: int = _TOPIC_SCROLL_ITERATIONS) -> None:
    """Scroll down to trigger lazy-loading content."""
    for _ in range(times):
        page.evaluate("window.scrollBy(0, window.innerHeight)")
        page.wait_for_timeout(1000)


# ---------------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------------


def get_trending_threads(
    max_topics: int = 5,
    max_threads_per_topic: int = 10,
) -> List[Dict[str, str]]:
    """Fetch threads from the trending topics on Threads.

    Opens a single Playwright session, walks through the trending topics,
    and extracts posts from each one.

    Args:
        max_topics: Maximum number of trending topics to visit.
        max_threads_per_topic: Maximum number of posts per topic.

    Returns:
        A list of thread dicts: ``{text, username, permalink, shortcode, topic_title}``.

    Raises:
        TrendingScrapeError: If trending content cannot be scraped.
    """
    print_step("🔥 Fetching posts from Trending now on Threads...")

    all_threads: List[Dict[str, str]] = []

    with sync_playwright() as p:
        browser, context = _create_browser_context(p)
        page = context.new_page()

        try:
            # Step 1: Navigate to the search page
            page.goto(THREADS_SEARCH_URL, timeout=_PAGE_LOAD_TIMEOUT_MS)
            page.wait_for_load_state(
                "domcontentloaded", timeout=_PAGE_LOAD_TIMEOUT_MS
            )
            page.wait_for_timeout(_CONTENT_WAIT_MS)

            # Step 2: Extract trending topics
            topics = _extract_topic_links(page, limit=max_topics)
            if not topics:
                raise TrendingScrapeError(
                    "No trending topics found on Threads. "
                    "Threads may have changed its UI or may require login."
                )

            topic_names = ", ".join(t["title"][:30] for t in topics[:3])
            suffix = "..." if len(topics) > 3 else ""
            print_substep(
                f"🔥 Found {len(topics)} trending topics: {topic_names}{suffix}",
                style="bold blue",
            )

            # Step 3: Visit each topic and extract threads
            for topic in topics:
                try:
                    page.goto(topic["url"], timeout=_PAGE_LOAD_TIMEOUT_MS)
                    page.wait_for_load_state(
                        "domcontentloaded", timeout=_PAGE_LOAD_TIMEOUT_MS
                    )
                    page.wait_for_timeout(_CONTENT_WAIT_MS)
                    _scroll_page(page, times=2)

                    threads = _extract_post_links(
                        page, limit=max_threads_per_topic
                    )
                    for t in threads:
                        t["topic_title"] = topic["title"]
                    all_threads.extend(threads)

                    print_substep(
                        f" 📝 Topic '{topic['title'][:30]}': "
                        f"{len(threads)} posts",
                        style="bold blue",
                    )
                except PlaywrightTimeoutError:
                    print_substep(
                        f" ⚠️ Timeout on topic '{topic['title'][:30]}'",
                        style="bold yellow",
                    )
                except Exception as exc:
                    print_substep(
                        f" ⚠️ Error on topic '{topic['title'][:30]}': {exc}",
                        style="bold yellow",
                    )

        except TrendingScrapeError:
            raise
        except PlaywrightTimeoutError as exc:
            raise TrendingScrapeError(
                "Timeout while loading the Threads page. Check your network connection."
|
||||
) from exc
|
||||
except Exception as exc:
|
||||
raise TrendingScrapeError(
|
||||
f"Lỗi khi scrape trending: {exc}"
|
||||
) from exc
|
||||
finally:
|
||||
browser.close()
|
||||
|
||||
print_substep(
|
||||
f"✅ Tổng cộng {len(all_threads)} bài viết từ trending",
|
||||
style="bold green",
|
||||
)
|
||||
return all_threads
|
||||
|
||||
|
||||
def scrape_thread_replies(
|
||||
thread_url: str, limit: int = 50
|
||||
) -> List[Dict[str, str]]:
|
||||
"""Lấy replies của một thread bằng cách scrape trang web.
|
||||
|
||||
Sử dụng khi không thể dùng Threads API chính thức
|
||||
(ví dụ thread không thuộc user đã xác thực).
|
||||
|
||||
Args:
|
||||
thread_url: URL của thread trên Threads.
|
||||
limit: Số replies tối đa.
|
||||
|
||||
Returns:
|
||||
Danh sách reply dicts: ``{text, username}``.
|
||||
"""
|
||||
print_substep(f"💬 Đang lấy replies từ: {thread_url[:60]}...")
|
||||
|
||||
replies: List[Dict[str, str]] = []
|
||||
|
||||
with sync_playwright() as p:
|
||||
browser, context = _create_browser_context(p)
|
||||
page = context.new_page()
|
||||
|
||||
try:
|
||||
page.goto(thread_url, timeout=_PAGE_LOAD_TIMEOUT_MS)
|
||||
page.wait_for_load_state(
|
||||
"domcontentloaded", timeout=_PAGE_LOAD_TIMEOUT_MS
|
||||
)
|
||||
page.wait_for_timeout(_CONTENT_WAIT_MS)
|
||||
|
||||
replies = _extract_replies(page, limit=limit)
|
||||
except PlaywrightTimeoutError:
|
||||
print_substep(
|
||||
"⚠️ Timeout khi tải thread", style="bold yellow"
|
||||
)
|
||||
except Exception as exc:
|
||||
print_substep(
|
||||
f"⚠️ Lỗi lấy replies: {exc}", style="bold yellow"
|
||||
)
|
||||
finally:
|
||||
browser.close()
|
||||
|
||||
print_substep(f"💬 Đã lấy {len(replies)} replies", style="bold blue")
|
||||
return replies
|
||||
@ -0,0 +1,129 @@
"""
Base Uploader - base class for all uploaders.
"""

import os
import time
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import List, Optional

from utils.console import print_step, print_substep


@dataclass
class VideoMetadata:
    """Metadata for a video to be uploaded."""

    file_path: str
    title: str
    description: str = ""
    tags: List[str] = field(default_factory=list)
    hashtags: List[str] = field(default_factory=list)
    thumbnail_path: Optional[str] = None
    schedule_time: Optional[str] = None  # ISO 8601 format
    privacy: str = "public"  # public, private, unlisted
    category: str = "Entertainment"
    language: str = "vi"  # Vietnamese


class BaseUploader(ABC):
    """Base class for all platform uploaders."""

    platform_name: str = "Unknown"

    def __init__(self):
        self._authenticated = False

    @abstractmethod
    def authenticate(self) -> bool:
        """Authenticate with the platform API.

        Returns:
            True if authentication succeeds.
        """

    @abstractmethod
    def upload(self, metadata: VideoMetadata) -> Optional[str]:
        """Upload the video to the platform.

        Args:
            metadata: VideoMetadata describing the video.

        Returns:
            URL of the uploaded video, or None on failure.
        """

    def validate_video(self, metadata: VideoMetadata) -> bool:
        """Check that the video is valid before uploading.

        Args:
            metadata: VideoMetadata to check.

        Returns:
            True if valid.
        """
        if not os.path.exists(metadata.file_path):
            print_substep(
                f"[{self.platform_name}] File không tồn tại: {metadata.file_path}", style="bold red"
            )
            return False

        file_size = os.path.getsize(metadata.file_path)
        if file_size == 0:
            print_substep(
                f"[{self.platform_name}] File rỗng: {metadata.file_path}", style="bold red"
            )
            return False

        if not metadata.title:
            print_substep(f"[{self.platform_name}] Thiếu tiêu đề video", style="bold red")
            return False

        return True

    def safe_upload(self, metadata: VideoMetadata, max_retries: int = 3) -> Optional[str]:
        """Upload the video with retry logic.

        Args:
            metadata: VideoMetadata describing the video.
            max_retries: Maximum number of attempts.

        Returns:
            URL of the uploaded video, or None on failure.
        """
        if not self.validate_video(metadata):
            return None

        if not self._authenticated:
            print_step(f"Đang xác thực với {self.platform_name}...")
            if not self.authenticate():
                print_substep(f"Xác thực {self.platform_name} thất bại!", style="bold red")
                return None

        for attempt in range(1, max_retries + 1):
            try:
                print_step(f"Đang upload lên {self.platform_name} (lần {attempt}/{max_retries})...")
                url = self.upload(metadata)
                if url:
                    print_substep(
                        f"Upload {self.platform_name} thành công! URL: {url}",
                        style="bold green",
                    )
                    return url
            except Exception as e:
                print_substep(
                    f"[{self.platform_name}] Lỗi upload (lần {attempt}): {e}",
                    style="bold red",
                )
            # Back off before the next attempt, whether upload() raised or
            # returned None without raising.
            if attempt < max_retries:
                backoff = min(2**attempt, 60)  # Exponential backoff, capped at 60s
                print_substep(f"Chờ {backoff}s trước khi thử lại...", style="bold yellow")
                time.sleep(backoff)

        print_substep(
            f"Upload {self.platform_name} thất bại sau {max_retries} lần thử!", style="bold red"
        )
        return None
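The retry loop in `safe_upload` can be exercised in isolation. The sketch below is a standalone approximation of that loop (not the class itself); `retry_upload`, `flaky_upload`, and the injected `sleep` callback are made up for illustration. It shows the capped exponential backoff: `min(2**attempt, 60)` seconds between attempts, retrying both on exceptions and on a falsy result.

```python
import time
from typing import Callable, Optional


def retry_upload(upload: Callable[[], Optional[str]], max_retries: int = 3,
                 sleep: Callable[[float], None] = time.sleep) -> Optional[str]:
    """Mirror of BaseUploader.safe_upload's retry loop: retry on exception
    or falsy result, backing off min(2**attempt, 60) seconds in between."""
    for attempt in range(1, max_retries + 1):
        try:
            url = upload()
            if url:
                return url
        except Exception:
            pass  # swallow and fall through to the backoff, as safe_upload does
        if attempt < max_retries:
            sleep(min(2**attempt, 60))
    return None


# A hypothetical flaky upload that fails twice, then succeeds.
calls = {"n": 0}

def flaky_upload():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "https://example.com/video/1"

waits: list = []
result = retry_upload(flaky_upload, max_retries=3, sleep=waits.append)
print(result, waits)  # → https://example.com/video/1 [2, 4]
```

Injecting `sleep` keeps the sketch testable without actually waiting; the real method calls `time.sleep` directly.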
@ -0,0 +1,218 @@
"""
Facebook Uploader - upload videos to Facebook using the Graph API.

Requirements:
- Facebook Developer App
- Page Access Token (for Page uploads) or User Access Token
- Permissions: publish_video, pages_manage_posts
Docs: https://developers.facebook.com/docs/video-api/guides/publishing
"""

import os
from typing import Optional

import requests

from uploaders.base_uploader import BaseUploader, VideoMetadata
from utils import settings
from utils.console import print_substep


class FacebookUploader(BaseUploader):
    """Upload videos to a Facebook Page/Profile."""

    platform_name = "Facebook"

    # Facebook API endpoints
    GRAPH_API_BASE = "https://graph.facebook.com/v21.0"

    # Limits
    MAX_DESCRIPTION_LENGTH = 63206
    MAX_TITLE_LENGTH = 255
    MAX_FILE_SIZE = 10 * 1024 * 1024 * 1024  # 10 GB

    def __init__(self):
        super().__init__()
        self.config = settings.config.get("uploaders", {}).get("facebook", {})
        self.access_token = None
        self.page_id = None

    def authenticate(self) -> bool:
        """Authenticate with the Facebook Graph API.

        Uses a Page Access Token for uploads to a Page.

        Returns:
            True if authentication succeeds.
        """
        self.access_token = self.config.get("access_token", "")
        self.page_id = self.config.get("page_id", "")

        if not self.access_token:
            print_substep("Facebook: Thiếu access_token", style="bold red")
            return False

        if not self.page_id:
            print_substep("Facebook: Thiếu page_id", style="bold red")
            return False

        # Verify the token
        try:
            response = requests.get(
                f"{self.GRAPH_API_BASE}/me",
                params={"access_token": self.access_token},
                timeout=15,
            )
            response.raise_for_status()
            data = response.json()

            if "id" in data:
                self._authenticated = True
                print_substep(
                    f"Facebook: Xác thực thành công (Page: {data.get('name', self.page_id)}) ✅",
                    style="bold green",
                )
                return True

            print_substep("Facebook: Token không hợp lệ", style="bold red")
            return False

        except Exception as e:
            print_substep(f"Facebook: Lỗi xác thực - {e}", style="bold red")
            return False

    def upload(self, metadata: VideoMetadata) -> Optional[str]:
        """Upload the video to a Facebook Page.

        Uses the chunked (resumable) upload flow for large files.

        Args:
            metadata: VideoMetadata describing the video.

        Returns:
            URL of the video on Facebook, or None on failure.
        """
        if not self.access_token or not self.page_id:
            return None

        file_size = os.path.getsize(metadata.file_path)

        title = metadata.title[: self.MAX_TITLE_LENGTH]
        description = self._build_description(metadata)

        # Step 1: Initialize the upload session
        try:
            init_response = requests.post(
                f"{self.GRAPH_API_BASE}/{self.page_id}/videos",
                data={
                    "upload_phase": "start",
                    "file_size": file_size,
                    "access_token": self.access_token,
                },
                timeout=30,
            )
            init_response.raise_for_status()
            init_data = init_response.json()

            upload_session_id = init_data.get("upload_session_id", "")
            video_id = init_data.get("video_id", "")

            if not upload_session_id:
                print_substep("Facebook: Không thể khởi tạo upload session", style="bold red")
                return None

        except Exception as e:
            print_substep(f"Facebook: Lỗi khởi tạo upload - {e}", style="bold red")
            return None

        # Step 2: Upload the video in chunks
        try:
            chunk_size = 4 * 1024 * 1024  # 4 MB chunks
            start_offset = 0

            with open(metadata.file_path, "rb") as video_file:
                while start_offset < file_size:
                    # The API dictates the next offset; seek there in case it
                    # asks us to resend from an earlier position.
                    video_file.seek(start_offset)
                    chunk = video_file.read(chunk_size)
                    transfer_response = requests.post(
                        f"{self.GRAPH_API_BASE}/{self.page_id}/videos",
                        data={
                            "upload_phase": "transfer",
                            "upload_session_id": upload_session_id,
                            "start_offset": start_offset,
                            "access_token": self.access_token,
                        },
                        files={"video_file_chunk": ("chunk", chunk, "application/octet-stream")},
                        timeout=120,
                    )
                    transfer_response.raise_for_status()
                    transfer_data = transfer_response.json()

                    start_offset = int(transfer_data.get("start_offset", file_size))

                    if start_offset >= file_size:
                        break

        except Exception as e:
            print_substep(f"Facebook: Lỗi upload file - {e}", style="bold red")
            return None

        # Step 3: Finish the upload
        try:
            finish_data = {
                "upload_phase": "finish",
                "upload_session_id": upload_session_id,
                "access_token": self.access_token,
                "title": title,
                "description": description[: self.MAX_DESCRIPTION_LENGTH],
            }

            if metadata.schedule_time:
                finish_data["scheduled_publish_time"] = metadata.schedule_time
                finish_data["published"] = "false"

            if metadata.thumbnail_path and os.path.exists(metadata.thumbnail_path):
                with open(metadata.thumbnail_path, "rb") as thumb:
                    finish_response = requests.post(
                        f"{self.GRAPH_API_BASE}/{self.page_id}/videos",
                        data=finish_data,
                        files={"thumb": thumb},
                        timeout=60,
                    )
            else:
                finish_response = requests.post(
                    f"{self.GRAPH_API_BASE}/{self.page_id}/videos",
                    data=finish_data,
                    timeout=60,
                )

            finish_response.raise_for_status()
            finish_result = finish_response.json()

            if finish_result.get("success", False):
                return f"https://www.facebook.com/{self.page_id}/videos/{video_id}"

            print_substep("Facebook: Upload hoàn tất nhưng không thành công", style="bold red")
            return None

        except Exception as e:
            print_substep(f"Facebook: Lỗi kết thúc upload - {e}", style="bold red")
            return None

    def _build_description(self, metadata: VideoMetadata) -> str:
        """Build the description for a Facebook video."""
        parts = []
        if metadata.description:
            parts.append(metadata.description)

        if metadata.hashtags:
            hashtag_str = " ".join(f"#{tag}" for tag in metadata.hashtags)
            parts.append(hashtag_str)

        parts.append("")
        parts.append("🎬 Video được tạo tự động bởi Threads Video Maker Bot")

        return "\n".join(parts)
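The transfer loop above reads 4 MB at a time and stops once the returned offset reaches the file size. Under the simplifying assumption that the API always advances `start_offset` by exactly the bytes just sent, the sequence of transfer calls can be computed up front. `chunk_plan` is a hypothetical helper, not part of the uploader:

```python
def chunk_plan(file_size: int, chunk_size: int = 4 * 1024 * 1024):
    """(offset, size) pairs the transfer loop would send, assuming the
    API's returned start_offset always advances by the bytes just sent."""
    plan = []
    start = 0
    while start < file_size:
        size = min(chunk_size, file_size - start)  # final chunk may be short
        plan.append((start, size))
        start += size
    return plan


# A 10-byte "file" with 4-byte chunks → three transfer calls.
print(chunk_plan(10, 4))  # → [(0, 4), (4, 4), (8, 2)]
```

In the real loop the server's `start_offset` is authoritative, which is why the uploader seeks before each read rather than trusting sequential reads.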
@ -0,0 +1,227 @@
"""
TikTok Uploader - upload videos to TikTok using the Content Posting API.

Requirements:
- TikTok Developer App
- Content Posting API access
- OAuth2 access token
Docs: https://developers.tiktok.com/doc/content-posting-api-get-started
"""

import os
import time
from typing import Optional

import requests

from uploaders.base_uploader import BaseUploader, VideoMetadata
from utils import settings
from utils.console import print_substep


class TikTokUploader(BaseUploader):
    """Upload videos to TikTok."""

    platform_name = "TikTok"

    # TikTok API endpoints
    API_BASE = "https://open.tiktokapis.com/v2"
    TOKEN_URL = "https://open.tiktokapis.com/v2/oauth/token/"

    # Limits
    MAX_CAPTION_LENGTH = 2200
    MAX_FILE_SIZE = 4 * 1024 * 1024 * 1024  # 4 GB
    MIN_DURATION = 3  # seconds
    MAX_DURATION = 600  # 10 minutes

    def __init__(self):
        super().__init__()
        self.config = settings.config.get("uploaders", {}).get("tiktok", {})
        self.access_token = None

    def authenticate(self) -> bool:
        """Authenticate with the TikTok API using a refresh token.

        Returns:
            True if authentication succeeds.
        """
        client_key = self.config.get("client_key", "")
        client_secret = self.config.get("client_secret", "")
        refresh_token = self.config.get("refresh_token", "")

        if not all([client_key, client_secret, refresh_token]):
            print_substep(
                "TikTok: Thiếu credentials (client_key, client_secret, refresh_token)",
                style="bold red",
            )
            return False

        try:
            # The v2 token endpoint expects form-encoded parameters.
            response = requests.post(
                self.TOKEN_URL,
                data={
                    "client_key": client_key,
                    "client_secret": client_secret,
                    "grant_type": "refresh_token",
                    "refresh_token": refresh_token,
                },
                timeout=30,
            )
            response.raise_for_status()

            token_data = response.json()
            # v2 returns the token at the top level; fall back to the older
            # nested "data" shape just in case.
            self.access_token = token_data.get("access_token") or token_data.get(
                "data", {}
            ).get("access_token", "")

            if self.access_token:
                self._authenticated = True
                print_substep("TikTok: Xác thực thành công! ✅", style="bold green")
                return True

            print_substep("TikTok: Không lấy được access token", style="bold red")
            return False

        except Exception as e:
            print_substep(f"TikTok: Lỗi xác thực - {e}", style="bold red")
            return False

    def upload(self, metadata: VideoMetadata) -> Optional[str]:
        """Upload the video to TikTok using the Content Posting API.

        Flow:
        1. Initialize the upload → get upload_url
        2. Upload the video file to upload_url
        3. Poll the publish status until the video is live

        Args:
            metadata: VideoMetadata describing the video.

        Returns:
            URL of the video on TikTok, or None on failure.
        """
        if not self.access_token:
            return None

        file_size = os.path.getsize(metadata.file_path)
        if file_size > self.MAX_FILE_SIZE:
            print_substep(f"TikTok: File quá lớn ({file_size} bytes)", style="bold red")
            return None

        # Build the caption
        caption = self._build_caption(metadata)

        # Step 1: Initialize the upload
        headers = {
            "Authorization": f"Bearer {self.access_token}",
            "Content-Type": "application/json; charset=UTF-8",
        }

        init_body = {
            "post_info": {
                "title": caption,
                "privacy_level": self._map_privacy(metadata.privacy),
                "disable_duet": False,
                "disable_comment": False,
                "disable_stitch": False,
            },
            "source_info": {
                "source": "FILE_UPLOAD",
                "video_size": file_size,
                "chunk_size": file_size,  # Single-chunk upload
                "total_chunk_count": 1,
            },
        }

        if metadata.schedule_time:
            init_body["post_info"]["schedule_time"] = metadata.schedule_time

        try:
            # Direct Post init endpoint; the inbox variant does not accept post_info.
            init_response = requests.post(
                f"{self.API_BASE}/post/publish/video/init/",
                headers=headers,
                json=init_body,
                timeout=30,
            )
            init_response.raise_for_status()
            init_data = init_response.json()

            publish_id = init_data.get("data", {}).get("publish_id", "")
            upload_url = init_data.get("data", {}).get("upload_url", "")

            if not upload_url:
                print_substep("TikTok: Không lấy được upload URL", style="bold red")
                return None

        except Exception as e:
            print_substep(f"TikTok: Lỗi khởi tạo upload - {e}", style="bold red")
            return None

        # Step 2: Upload the video file
        try:
            with open(metadata.file_path, "rb") as video_file:
                upload_headers = {
                    "Content-Type": "video/mp4",
                    "Content-Length": str(file_size),
                    "Content-Range": f"bytes 0-{file_size - 1}/{file_size}",
                }
                upload_response = requests.put(
                    upload_url,
                    headers=upload_headers,
                    data=video_file,
                    timeout=600,
                )
                upload_response.raise_for_status()

        except Exception as e:
            print_substep(f"TikTok: Lỗi upload file - {e}", style="bold red")
            return None

        # Step 3: Poll the publish status
        status_url = f"{self.API_BASE}/post/publish/status/fetch/"
        for attempt in range(10):
            try:
                status_response = requests.post(
                    status_url,
                    headers=headers,
                    json={"publish_id": publish_id},
                    timeout=30,
                )
                status_data = status_response.json()
                status = status_data.get("data", {}).get("status", "")

                if status == "PUBLISH_COMPLETE":
                    print_substep("TikTok: Upload thành công! ✅", style="bold green")
                    return f"https://www.tiktok.com/@user/video/{publish_id}"
                if status == "FAILED":
                    reason = status_data.get("data", {}).get("fail_reason", "Unknown")
                    print_substep(f"TikTok: Upload thất bại - {reason}", style="bold red")
                    return None

                time.sleep(5)  # Wait 5 seconds before checking again
            except Exception:
                time.sleep(5)

        print_substep("TikTok: Upload timeout", style="bold yellow")
        return None

    def _build_caption(self, metadata: VideoMetadata) -> str:
        """Build the caption for a TikTok video."""
        parts = []
        if metadata.title:
            parts.append(metadata.title)
        if metadata.hashtags:
            hashtag_str = " ".join(f"#{tag}" for tag in metadata.hashtags)
            parts.append(hashtag_str)
        caption = " ".join(parts)
        return caption[: self.MAX_CAPTION_LENGTH]

    @staticmethod
    def _map_privacy(privacy: str) -> str:
        """Map a privacy setting to TikTok's format."""
        mapping = {
            "public": "PUBLIC_TO_EVERYONE",
            "private": "SELF_ONLY",
            "friends": "MUTUAL_FOLLOW_FRIENDS",
            "unlisted": "SELF_ONLY",
        }
        return mapping.get(privacy, "PUBLIC_TO_EVERYONE")
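The single-chunk PUT in step 2 describes the whole file with one `Content-Range` header. The sketch below reproduces how those headers are built in `upload()` (`single_chunk_headers` is a hypothetical helper extracted for illustration); note the range end is inclusive, hence `file_size - 1`:

```python
def single_chunk_headers(file_size: int) -> dict:
    """Headers for a single-chunk upload PUT, as built in TikTokUploader.upload().
    Content-Range is 'bytes <first>-<last>/<total>' with an inclusive last byte."""
    return {
        "Content-Type": "video/mp4",
        "Content-Length": str(file_size),
        "Content-Range": f"bytes 0-{file_size - 1}/{file_size}",
    }


print(single_chunk_headers(10)["Content-Range"])  # → bytes 0-9/10
```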
@ -0,0 +1,137 @@
"""
Upload Manager - manage uploading a video to several platforms at once.
"""

from typing import Dict, List, Optional

from uploaders.base_uploader import BaseUploader, VideoMetadata
from uploaders.facebook_uploader import FacebookUploader
from uploaders.tiktok_uploader import TikTokUploader
from uploaders.youtube_uploader import YouTubeUploader
from utils import settings
from utils.console import print_step, print_substep


class UploadManager:
    """Manage uploading a video to multiple platforms."""

    PLATFORM_MAP = {
        "youtube": YouTubeUploader,
        "tiktok": TikTokUploader,
        "facebook": FacebookUploader,
    }

    def __init__(self):
        self.uploaders: Dict[str, BaseUploader] = {}
        self._init_uploaders()

    def _init_uploaders(self):
        """Instantiate uploaders based on the configuration."""
        upload_config = settings.config.get("uploaders", {})

        for platform_name, uploader_class in self.PLATFORM_MAP.items():
            platform_config = upload_config.get(platform_name, {})
            if platform_config.get("enabled", False):
                self.uploaders[platform_name] = uploader_class()
                print_substep(f"Đã kích hoạt uploader: {platform_name}", style="bold blue")

    def upload_to_all(
        self,
        video_path: str,
        title: str,
        description: str = "",
        tags: Optional[List[str]] = None,
        hashtags: Optional[List[str]] = None,
        thumbnail_path: Optional[str] = None,
        schedule_time: Optional[str] = None,
        privacy: str = "public",
    ) -> Dict[str, Optional[str]]:
        """Upload the video to every configured platform.

        Args:
            video_path: Path to the video file.
            title: Video title.
            description: Video description.
            tags: List of tags.
            hashtags: List of hashtags.
            thumbnail_path: Path to the thumbnail.
            schedule_time: Scheduled publish time (ISO 8601).
            privacy: Privacy setting (public/private/unlisted).

        Returns:
            Dict mapping platform name -> video URL (or None on failure).
        """
        if not self.uploaders:
            print_substep("Không có uploader nào được kích hoạt!", style="bold yellow")
            return {}

        metadata = VideoMetadata(
            file_path=video_path,
            title=title,
            description=description,
            tags=tags or [],
            hashtags=hashtags or self._default_hashtags(),
            thumbnail_path=thumbnail_path,
            schedule_time=schedule_time,
            privacy=privacy,
            language="vi",
        )

        results = {}
        print_step(f"Đang upload video lên {len(self.uploaders)} platform...")

        for platform_name, uploader in self.uploaders.items():
            print_step(f"📤 Đang upload lên {platform_name}...")
            results[platform_name] = uploader.safe_upload(metadata)

        # Summary
        print_step("📊 Kết quả upload:")
        success_count = 0
        for platform, url in results.items():
            if url:
                print_substep(f"  ✅ {platform}: {url}", style="bold green")
                success_count += 1
            else:
                print_substep(f"  ❌ {platform}: Thất bại", style="bold red")

        print_substep(
            f"Upload hoàn tất: {success_count}/{len(results)} platform thành công",
            style="bold blue",
        )

        return results

    def upload_to_platform(
        self,
        platform_name: str,
        metadata: VideoMetadata,
    ) -> Optional[str]:
        """Upload the video to one specific platform.

        Args:
            platform_name: Platform name.
            metadata: VideoMetadata describing the video.

        Returns:
            Video URL, or None on failure.
        """
        if platform_name not in self.uploaders:
            print_substep(f"Platform '{platform_name}' chưa được kích hoạt!", style="bold red")
            return None

        return self.uploaders[platform_name].safe_upload(metadata)

    @staticmethod
    def _default_hashtags() -> List[str]:
        """Default hashtags for the Vietnamese market."""
        return [
            "threads",
            "viral",
            "vietnam",
            "trending",
            "foryou",
            "fyp",
            "threadsvn",
        ]
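`_init_uploaders` only activates platforms whose config section has `enabled = true`, and each uploader then reads its credentials from `settings.config["uploaders"][<platform>]`. The fragment below shows a plausible `config.toml` layout matching those lookups; the section shape is inferred from the code, and every value is a placeholder:

```toml
[uploaders.youtube]
enabled = true
client_id = "YOUR_CLIENT_ID"
client_secret = "YOUR_CLIENT_SECRET"
refresh_token = "YOUR_REFRESH_TOKEN"

[uploaders.tiktok]
enabled = false
client_key = "YOUR_CLIENT_KEY"
client_secret = "YOUR_CLIENT_SECRET"
refresh_token = "YOUR_REFRESH_TOKEN"

[uploaders.facebook]
enabled = false
access_token = "YOUR_PAGE_ACCESS_TOKEN"
page_id = "YOUR_PAGE_ID"
```

With this layout, only the YouTube uploader would be instantiated; flipping `enabled` on the others adds them to the run.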
@ -0,0 +1,227 @@
|
||||
"""
|
||||
YouTube Uploader - Upload video lên YouTube sử dụng YouTube Data API v3.
|
||||
|
||||
Yêu cầu:
|
||||
- Google API credentials (OAuth2)
|
||||
- YouTube Data API v3 enabled
|
||||
- Scopes: https://www.googleapis.com/auth/youtube.upload
|
||||
"""
|
||||
|
||||
import json
|
||||
import os
|
||||
import time
|
||||
from typing import Optional
|
||||
|
||||
import requests
|
||||
|
||||
from uploaders.base_uploader import BaseUploader, VideoMetadata
|
||||
from utils import settings
|
||||
from utils.console import print_substep
|
||||
|
||||
|
||||
class YouTubeUploader(BaseUploader):
|
||||
"""Upload video lên YouTube."""
|
||||
|
||||
platform_name = "YouTube"
|
||||
|
||||
# YouTube API endpoints
|
||||
UPLOAD_URL = "https://www.googleapis.com/upload/youtube/v3/videos"
|
||||
VIDEOS_URL = "https://www.googleapis.com/youtube/v3/videos"
|
||||
TOKEN_URL = "https://oauth2.googleapis.com/token"
|
||||
|
||||
# Limits
|
||||
MAX_TITLE_LENGTH = 100
|
||||
MAX_DESCRIPTION_LENGTH = 5000
|
||||
MAX_TAGS = 500 # Total characters for all tags
|
||||
MAX_FILE_SIZE = 256 * 1024 * 1024 * 1024 # 256 GB
|
||||
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
self.config = settings.config.get("uploaders", {}).get("youtube", {})
|
||||
self.access_token = None
|
||||
|
||||
def authenticate(self) -> bool:
|
||||
"""Xác thực với YouTube API sử dụng refresh token.
|
||||
|
||||
Cấu hình cần có:
|
||||
- client_id
|
||||
- client_secret
|
||||
- refresh_token (lấy từ OAuth2 flow)
|
||||
|
||||
Returns:
|
||||
True nếu xác thực thành công.
|
||||
"""
|
||||
client_id = self.config.get("client_id", "")
|
||||
client_secret = self.config.get("client_secret", "")
|
||||
refresh_token = self.config.get("refresh_token", "")
|
||||
|
||||
if not all([client_id, client_secret, refresh_token]):
|
||||
print_substep(
|
||||
"YouTube: Thiếu credentials (client_id, client_secret, refresh_token)",
|
||||
style="bold red",
|
||||
)
|
||||
return False
|
||||
|
||||
try:
|
||||
response = requests.post(
|
||||
self.TOKEN_URL,
|
||||
data={
|
||||
"client_id": client_id,
|
||||
"client_secret": client_secret,
|
||||
"refresh_token": refresh_token,
|
||||
"grant_type": "refresh_token",
|
||||
},
|
||||
timeout=30,
|
||||
)
|
||||
response.raise_for_status()
|
||||
|
||||
token_data = response.json()
|
||||
self.access_token = token_data["access_token"]
|
||||
self._authenticated = True
|
||||
print_substep("YouTube: Xác thực thành công! ✅", style="bold green")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print_substep(f"YouTube: Lỗi xác thực - {e}", style="bold red")
|
||||
return False
|
||||
|
||||
def upload(self, metadata: VideoMetadata) -> Optional[str]:
|
||||
"""Upload video lên YouTube.
|
||||
|
||||
Args:
|
||||
metadata: VideoMetadata chứa thông tin video.
|
||||
|
||||
Returns:
|
||||
URL video trên YouTube, hoặc None nếu thất bại.
|
||||
"""
|
||||
if not self.access_token:
|
||||
return None
|
||||
|
||||
title = metadata.title[: self.MAX_TITLE_LENGTH]
|
||||
description = self._build_description(metadata)
|
||||
tags = metadata.tags or []
|
||||
|
||||
# Thêm hashtags vào description
|
||||
if metadata.hashtags:
|
||||
hashtag_str = " ".join(f"#{tag}" for tag in metadata.hashtags)
|
||||
description = f"{description}\n\n{hashtag_str}"
|
||||
|
||||
# Video metadata
|
||||
video_metadata = {
|
||||
"snippet": {
|
||||
"title": title,
|
||||
"description": description[: self.MAX_DESCRIPTION_LENGTH],
|
||||
"tags": tags,
|
||||
"categoryId": self._get_category_id(metadata.category),
|
||||
"defaultLanguage": metadata.language,
|
||||
"defaultAudioLanguage": metadata.language,
|
||||
},
|
||||
"status": {
|
||||
"privacyStatus": metadata.privacy,
|
||||
"selfDeclaredMadeForKids": False,
|
||||
},
|
||||
}
|
||||
|
||||
# Schedule publish time
|
||||
if metadata.schedule_time and metadata.privacy != "public":
|
||||
video_metadata["status"]["publishAt"] = metadata.schedule_time
|
||||
video_metadata["status"]["privacyStatus"] = "private"
|
||||
|
||||
headers = {
|
||||
"Authorization": f"Bearer {self.access_token}",
|
||||
"Content-Type": "application/json",
|
||||
}
|
||||
|
||||
# Step 1: Initiate resumable upload
|
||||
params = {
|
||||
"uploadType": "resumable",
|
||||
"part": "snippet,status",
|
||||
}
|
||||
|
||||
init_response = requests.post(
|
||||
self.UPLOAD_URL,
|
||||
headers=headers,
|
||||
params=params,
|
||||
json=video_metadata,
|
||||
timeout=30,
|
||||
)
|
||||
init_response.raise_for_status()
|
||||
|
||||
upload_url = init_response.headers.get("Location")
|
||||
if not upload_url:
|
||||
print_substep("YouTube: Không thể khởi tạo upload session", style="bold red")
|
||||
return None
|
||||
|
||||
# Step 2: Upload video file
|
||||
file_size = os.path.getsize(metadata.file_path)
|
||||
# Dynamic timeout: minimum 120s, add 60s per 100MB
|
||||
upload_timeout = max(120, 60 * (file_size // (100 * 1024 * 1024) + 1))
|
||||
with open(metadata.file_path, "rb") as video_file:
|
||||
upload_response = requests.put(
|
||||
upload_url,
|
||||
headers={
|
||||
"Content-Type": "video/mp4",
|
||||
"Content-Length": str(file_size),
|
||||
},
|
||||
data=video_file,
|
||||
timeout=upload_timeout,
|
||||
)
|
||||
upload_response.raise_for_status()
|
||||
|
||||
video_data = upload_response.json()
|
||||
video_id = video_data.get("id", "")
|
||||
|
||||
if not video_id:
|
||||
print_substep(
|
||||
"YouTube: Upload thành công nhưng không lấy được video ID", style="bold yellow"
|
||||
)
|
||||
return None
|
||||
|
||||
# Step 3: Upload thumbnail if available
|
||||
if metadata.thumbnail_path and os.path.exists(metadata.thumbnail_path):
|
||||
self._upload_thumbnail(video_id, metadata.thumbnail_path)
|
||||
|
||||
video_url = f"https://www.youtube.com/watch?v={video_id}"
|
||||
return video_url
|
||||
|
||||
def _upload_thumbnail(self, video_id: str, thumbnail_path: str):
|
||||
"""Upload thumbnail cho video."""
|
||||
try:
|
||||
url = f"https://www.googleapis.com/upload/youtube/v3/thumbnails/set"
|
||||
with open(thumbnail_path, "rb") as thumb_file:
|
||||
response = requests.post(
|
||||
url,
|
||||
headers={"Authorization": f"Bearer {self.access_token}"},
|
||||
params={"videoId": video_id},
|
||||
files={"media": thumb_file},
|
||||
timeout=60,
|
||||
)
|
||||
response.raise_for_status()
|
||||
print_substep("YouTube: Đã upload thumbnail ✅", style="bold green")
|
||||
except Exception as e:
|
||||
print_substep(f"YouTube: Lỗi upload thumbnail - {e}", style="bold yellow")
|
||||
|
||||
def _build_description(self, metadata: VideoMetadata) -> str:
|
||||
"""Tạo description cho video YouTube."""
|
||||
parts = []
|
||||
if metadata.description:
|
||||
parts.append(metadata.description)
|
||||
parts.append("")
|
||||
parts.append("🎬 Video được tạo tự động bởi Threads Video Maker Bot")
|
||||
parts.append(f"🌐 Ngôn ngữ: Tiếng Việt")
|
||||
return "\n".join(parts)
|
||||
|
||||
@staticmethod
|
||||
def _get_category_id(category: str) -> str:
|
||||
"""Map category name to YouTube category ID."""
|
||||
categories = {
|
||||
"Entertainment": "24",
|
||||
"People & Blogs": "22",
|
||||
"Comedy": "23",
|
||||
"Education": "27",
|
||||
"Science & Technology": "28",
|
||||
"News & Politics": "25",
|
||||
"Gaming": "20",
|
||||
"Music": "10",
|
||||
}
|
||||
return categories.get(category, "24")
|
||||
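The dynamic timeout rule used in step 2 of the upload flow can be isolated as a small helper. This is a sketch of that same rule (a 120 s floor, plus 60 s per started 100 MB); the function name is illustrative, not part of the uploader class:

```python
def upload_timeout_seconds(file_size_bytes: int) -> int:
    """Timeout rule from step 2: at least 120 s, plus 60 s per started 100 MB."""
    chunk = 100 * 1024 * 1024  # 100 MB
    return max(120, 60 * (file_size_bytes // chunk + 1))


# A 50 MB file keeps the 120 s floor; a 250 MB file gets 180 s.
print(upload_timeout_seconds(50 * 1024 * 1024))   # 120
print(upload_timeout_seconds(250 * 1024 * 1024))  # 180
```

Keeping the rule in one place makes it easy to tune the per-chunk allowance for slow VPS uplinks without touching the request code.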
@ -0,0 +1,202 @@
"""
Preflight Access-Token Checker, run before the pipeline starts.

Checks:
1. That access_token is configured in config.toml.
2. That the token is valid against the Threads API (/me endpoint).
3. If the token has expired, automatically attempts a refresh.
4. That user_id in the config matches the user who owns the token.

Usage:
    # Run directly:
    python -m utils.check_token

    # Or import in code:
    from utils.check_token import preflight_check
    preflight_check()  # raises SystemExit on failure
"""

import sys
from typing import Optional

import requests

from utils import settings
from utils.console import print_step, print_substep

THREADS_API_BASE = "https://graph.threads.net/v1.0"
_REQUEST_TIMEOUT_SECONDS = 15  # preflight should be fast


class TokenCheckError(Exception):
    """Raised when the access-token preflight fails."""


def _call_me_endpoint(access_token: str) -> dict:
    """GET /me?fields=id,username,name,threads_profile_picture_url,threads_biography

    Requests the full set of profile fields per the Threads Profiles API:
    https://developers.facebook.com/docs/threads/threads-profiles
    """
    url = f"{THREADS_API_BASE}/me"
    params = {
        "fields": "id,username,name,threads_profile_picture_url,threads_biography",
        "access_token": access_token,
    }
    response = requests.get(url, params=params, timeout=_REQUEST_TIMEOUT_SECONDS)

    # HTTP-level errors
    if response.status_code == 401:
        raise TokenCheckError(
            "Access token không hợp lệ hoặc đã hết hạn (HTTP 401).\n"
            "→ Cập nhật [threads.creds] access_token trong config.toml."
        )
    if response.status_code == 403:
        raise TokenCheckError(
            "Token thiếu quyền (HTTP 403).\n"
            "→ Đảm bảo token có quyền threads_basic_read trong Meta Developer Portal."
        )
    response.raise_for_status()

    data = response.json()

    # The Graph API may return HTTP 200 with an error body
    if "error" in data:
        err = data["error"]
        msg = err.get("message", "Unknown error")
        code = err.get("code", 0)
        raise TokenCheckError(f"Threads API trả về lỗi: {msg} (code={code})")

    return data


def _try_refresh(access_token: str) -> Optional[str]:
    """Attempt to refresh a long-lived Threads token.

    Returns the new token string, or None if refresh is not possible.
    """
    url = f"{THREADS_API_BASE}/refresh_access_token"
    try:
        resp = requests.get(
            url,
            params={
                "grant_type": "th_refresh_token",
                "access_token": access_token,
            },
            timeout=_REQUEST_TIMEOUT_SECONDS,
        )
        resp.raise_for_status()
        data = resp.json()
        if "error" in data:
            return None
        return data.get("access_token") or None
    except requests.RequestException:
        return None


def preflight_check() -> None:
    """Validate the Threads access token configured in *config.toml*.

    On success, prints a confirmation and returns normally.
    On failure, prints actionable diagnostics and raises ``SystemExit(1)``.
    """
    print_step("🔑 Kiểm tra access token trước khi chạy...")

    # --- 1. Check that the config values exist ---------------------------
    try:
        threads_creds = settings.config["threads"]["creds"]
        access_token: str = threads_creds.get("access_token", "").strip()
        user_id: str = threads_creds.get("user_id", "").strip()
    except (KeyError, TypeError):
        print_substep(
            "❌ Thiếu cấu hình [threads.creds] trong config.toml.\n"
            "   Cần có access_token và user_id.",
            style="bold red",
        )
        sys.exit(1)

    if not access_token:
        print_substep(
            "❌ access_token trống trong config.toml!\n"
            "   Lấy token tại: https://developers.facebook.com/docs/threads/get-started",
            style="bold red",
        )
        sys.exit(1)

    if not user_id:
        print_substep(
            "❌ user_id trống trong config.toml!\n"
            "   Lấy user_id bằng cách gọi /me với access token.",
            style="bold red",
        )
        sys.exit(1)

    # --- 2. Validate the token via the /me endpoint ----------------------
    try:
        me_data = _call_me_endpoint(access_token)
    except TokenCheckError as exc:
        # Token invalid → try a refresh
        print_substep(
            f"⚠️ Token hiện tại không hợp lệ: {exc}\n   Đang thử refresh token...",
            style="bold yellow",
        )
        new_token = _try_refresh(access_token)
        if new_token:
            try:
                me_data = _call_me_endpoint(new_token)
                access_token = new_token
                # Update the in-memory config so downstream code uses the new token
                settings.config["threads"]["creds"]["access_token"] = new_token
                print_substep("✅ Token đã được refresh thành công!", style="bold green")
            except TokenCheckError as inner:
                print_substep(
                    f"❌ Token mới sau refresh vẫn lỗi: {inner}\n"
                    "   Vui lòng lấy token mới từ Meta Developer Portal:\n"
                    "   https://developers.facebook.com/docs/threads/get-started",
                    style="bold red",
                )
                sys.exit(1)
        else:
            print_substep(
                "❌ Không thể refresh token.\n"
                "   Vui lòng lấy token mới từ Meta Developer Portal:\n"
                "   https://developers.facebook.com/docs/threads/get-started",
                style="bold red",
            )
            sys.exit(1)
    except requests.RequestException as exc:
        print_substep(
            f"❌ Lỗi kết nối khi kiểm tra token: {exc}\n   Kiểm tra kết nối mạng và thử lại.",
            style="bold red",
        )
        sys.exit(1)

    # --- 3. Cross-check user_id ------------------------------------------
    api_user_id = me_data.get("id", "")
    api_username = me_data.get("username", "N/A")

    if api_user_id and api_user_id != user_id:
        print_substep(
            f"⚠️ user_id trong config ({user_id}) khác với user sở hữu token ({api_user_id}).\n"
            "   Nếu bạn muốn lấy threads của chính mình, hãy cập nhật user_id trong config.toml.\n"
            "   Đang tiếp tục với token hiện tại...",
            style="bold yellow",
        )

    print_substep(
        f"✅ Access token hợp lệ — @{api_username} (ID: {api_user_id})",
        style="bold green",
    )


# Allow running standalone: python -m utils.check_token
if __name__ == "__main__":
    from pathlib import Path

    directory = Path().absolute()
    settings.check_toml(
        f"{directory}/utils/.config.template.toml",
        f"{directory}/config.toml",
    )
    preflight_check()
    print_step("🎉 Tất cả kiểm tra đều OK — sẵn sàng chạy!")
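The fallback order inside `preflight_check` (validate, refresh once, re-validate) can be sketched independently of the network calls by injecting the validators. The helper name and the lambda stubs below are illustrative only, not part of the module:

```python
from typing import Callable, Optional


def validate_or_refresh(
    token: str,
    validate: Callable[[str], bool],
    refresh: Callable[[str], Optional[str]],
) -> Optional[str]:
    """Return a working token, attempting exactly one refresh if needed."""
    if validate(token):
        return token
    new_token = refresh(token)  # single refresh attempt, mirroring _try_refresh
    if new_token and validate(new_token):
        return new_token
    return None  # caller should exit with diagnostics, as preflight_check does


# Example: an expired token that can be refreshed exactly once.
result = validate_or_refresh(
    "stale",
    validate=lambda t: t == "fresh",
    refresh=lambda t: "fresh" if t == "stale" else None,
)
print(result)  # fresh
```

Separating the decision flow from the HTTP calls also makes the exit paths unit-testable without hitting the Threads API.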
@ -0,0 +1,108 @@
"""
Title History: stores and checks titles that have already been used, to avoid duplicates.

The list of titles that already produced a video is persisted to a JSON file.
When a new thread is selected, its title is checked against this history.
"""

import json
import os
import time

from utils.console import print_substep

TITLE_HISTORY_PATH = "./video_creation/data/title_history.json"


def _ensure_file_exists() -> None:
    """Create title_history.json if it does not exist yet."""
    os.makedirs(os.path.dirname(TITLE_HISTORY_PATH), exist_ok=True)
    if not os.path.exists(TITLE_HISTORY_PATH):
        with open(TITLE_HISTORY_PATH, "w", encoding="utf-8") as f:
            json.dump([], f)


def load_title_history() -> list:
    """Read the list of previously used titles.

    Returns:
        A list of dicts describing the titles already used.
    """
    _ensure_file_exists()
    try:
        with open(TITLE_HISTORY_PATH, "r", encoding="utf-8") as f:
            return json.load(f)
    except (json.JSONDecodeError, ValueError):
        return []


def is_title_used(title: str) -> bool:
    """Check whether a title has already been used.

    Titles are normalized (stripped and lowercased) before comparison, so
    case or whitespace differences do not slip past the duplicate check.

    Args:
        title: The title to check.

    Returns:
        True if the title has already been used, False otherwise.
    """
    if not title or not title.strip():
        return False

    history = load_title_history()
    normalized_title = title.strip().lower()

    for entry in history:
        saved_title = entry.get("title", "").strip().lower()
        if saved_title == normalized_title:
            return True

    return False


def save_title(title: str, thread_id: str = "", source: str = "threads") -> None:
    """Record a used title in the history.

    Args:
        title: Title of the video that was created.
        thread_id: ID of the thread (for reference).
        source: Content source (threads/reddit).
    """
    if not title or not title.strip():
        return

    _ensure_file_exists()

    history = load_title_history()

    # Check for duplicates before saving
    normalized_title = title.strip().lower()
    for entry in history:
        if entry.get("title", "").strip().lower() == normalized_title:
            print_substep(f"Title đã tồn tại trong lịch sử, bỏ qua: {title[:50]}...", style="dim")
            return

    entry = {
        "title": title.strip(),
        "thread_id": thread_id,
        "source": source,
        "created_at": int(time.time()),
    }
    history.append(entry)

    with open(TITLE_HISTORY_PATH, "w", encoding="utf-8") as f:
        json.dump(history, f, ensure_ascii=False, indent=4)

    print_substep(f"Đã lưu title vào lịch sử: {title[:50]}...", style="bold green")


def get_title_count() -> int:
    """Count the titles already used.

    Returns:
        The number of titles in the history.
    """
    return len(load_title_history())
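The normalization rule shared by `is_title_used` and `save_title` (strip whitespace, then lowercase) can be shown in isolation. The helper below is illustrative, not part of the module:

```python
def normalize_title(title: str) -> str:
    """Normalization used for duplicate checks: strip whitespace, lowercase."""
    return title.strip().lower()


# Case and surrounding whitespace do not defeat the duplicate check.
history = [{"title": "Chuyện Hay Ho  "}]
candidate = "  chuyện hay ho"
is_dup = any(
    normalize_title(entry["title"]) == normalize_title(candidate)
    for entry in history
)
print(is_dup)  # True
```

Python's `str.lower()` is Unicode-aware, so Vietnamese titles with diacritics normalize correctly as well.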
@ -0,0 +1,343 @@
"""
Threads Screenshot Generator: renders mock images of the Threads interface.

Pillow is used to render the images instead of capturing browser screenshots,
because Threads has no static web UI that is as easy to capture as Reddit's.
"""

import os
import re
from pathlib import Path
from typing import Final, List, Tuple

from PIL import Image, ImageDraw, ImageFont
from rich.progress import track

from utils import settings
from utils.console import print_step, print_substep

# Threads color themes
THEMES = {
    "dark": {
        "bg_color": (0, 0, 0),
        "card_bg": (30, 30, 30),
        "text_color": (255, 255, 255),
        "secondary_text": (140, 140, 140),
        "border_color": (50, 50, 50),
        "accent_color": (0, 149, 246),  # Threads blue
        "reply_line": (60, 60, 60),
    },
    "light": {
        "bg_color": (255, 255, 255),
        "card_bg": (255, 255, 255),
        "text_color": (0, 0, 0),
        "secondary_text": (130, 130, 130),
        "border_color": (219, 219, 219),
        "accent_color": (0, 149, 246),
        "reply_line": (200, 200, 200),
    },
}

# Avatar color palette for comments
AVATAR_COLORS = [
    (88, 101, 242),  # Blue
    (237, 66, 69),  # Red
    (87, 242, 135),  # Green
    (254, 231, 92),  # Yellow
    (235, 69, 158),  # Pink
]


def _get_font(size: int, bold: bool = False) -> ImageFont.FreeTypeFont:
    """Load a font with Vietnamese glyph support."""
    font_dir = os.path.join(os.path.dirname(os.path.dirname(__file__)), "fonts")
    if bold:
        font_path = os.path.join(font_dir, "Roboto-Bold.ttf")
    else:
        font_path = os.path.join(font_dir, "Roboto-Medium.ttf")

    try:
        return ImageFont.truetype(font_path, size)
    except OSError:
        return ImageFont.load_default()


def _wrap_text(text: str, font: ImageFont.FreeTypeFont, max_width: int) -> List[str]:
    """Wrap text to fit within max_width pixels."""
    words = text.split()
    lines = []
    current_line = ""

    for word in words:
        test_line = f"{current_line} {word}".strip()
        bbox = font.getbbox(test_line)
        if bbox[2] <= max_width:
            current_line = test_line
        else:
            if current_line:
                lines.append(current_line)
            current_line = word

    if current_line:
        lines.append(current_line)

    return lines if lines else [""]


def _draw_avatar(draw: ImageDraw.ImageDraw, x: int, y: int, size: int, color: Tuple[int, ...]):
    """Draw a round placeholder avatar."""
    draw.ellipse([x, y, x + size, y + size], fill=color)
    # Draw an initial letter inside the avatar
    font = _get_font(size // 2, bold=True)
    draw.text(
        (x + size // 4, y + size // 6),
        "T",
        fill=(255, 255, 255),
        font=font,
    )


def create_thread_post_image(
    thread_obj: dict,
    theme_name: str = "dark",
    width: int = 1080,
) -> Image.Image:
    """Render the image for the main Threads post (title/post).

    Args:
        thread_obj: Thread object holding the post data.
        theme_name: Theme name ("dark" or "light").
        width: Image width in pixels.

    Returns:
        PIL Image object.
    """
    theme = THEMES.get(theme_name, THEMES["dark"])

    padding = 40
    content_width = width - (padding * 2)
    avatar_size = 60

    # Fonts
    username_font = _get_font(28, bold=True)
    body_font = _get_font(32)
    meta_font = _get_font(22)

    author = thread_obj.get("thread_author", "@user")
    text = thread_obj.get("thread_title", thread_obj.get("thread_post", ""))

    # Compute the total height
    text_lines = _wrap_text(text, body_font, content_width - avatar_size - 30)
    line_height = 42
    text_height = len(text_lines) * line_height

    total_height = padding + avatar_size + 20 + text_height + 60 + padding

    # Create the image
    img = Image.new("RGBA", (width, total_height), theme["bg_color"])
    draw = ImageDraw.Draw(img)

    y_cursor = padding

    # Avatar
    _draw_avatar(draw, padding, y_cursor, avatar_size, theme["accent_color"])

    # Username
    draw.text(
        (padding + avatar_size + 15, y_cursor + 5),
        author,
        fill=theme["text_color"],
        font=username_font,
    )

    # Timestamp
    draw.text(
        (padding + avatar_size + 15, y_cursor + 35),
        "🧵 Threads",
        fill=theme["secondary_text"],
        font=meta_font,
    )

    y_cursor += avatar_size + 20

    # Thread line (vertical line from avatar to content)
    line_x = padding + avatar_size // 2
    draw.line(
        [(line_x, padding + avatar_size + 5), (line_x, y_cursor - 5)],
        fill=theme["reply_line"],
        width=3,
    )

    # Body text
    for line in text_lines:
        draw.text(
            (padding + 10, y_cursor),
            line,
            fill=theme["text_color"],
            font=body_font,
        )
        y_cursor += line_height

    # Interaction bar
    y_cursor += 20
    icons = "❤️ 💬 🔄 ✈️"
    draw.text(
        (padding + 10, y_cursor),
        icons,
        fill=theme["secondary_text"],
        font=meta_font,
    )

    return img


def create_comment_image(
    comment: dict,
    index: int,
    theme_name: str = "dark",
    width: int = 1080,
) -> Image.Image:
    """Render the image for a single Threads reply/comment.

    Args:
        comment: Comment dict.
        index: Ordinal index of the comment.
        theme_name: Theme name.
        width: Image width in pixels.

    Returns:
        PIL Image object.
    """
    theme = THEMES.get(theme_name, THEMES["dark"])

    padding = 40
    content_width = width - (padding * 2)
    avatar_size = 50

    # Fonts
    username_font = _get_font(24, bold=True)
    body_font = _get_font(30)
    meta_font = _get_font(20)

    author = comment.get("comment_author", f"@user{index}")
    text = comment.get("comment_body", "")

    # Compute the total height
    text_lines = _wrap_text(text, body_font, content_width - avatar_size - 30)
    line_height = 40
    text_height = len(text_lines) * line_height

    total_height = padding + avatar_size + 15 + text_height + 40 + padding

    # Create the image
    img = Image.new("RGBA", (width, total_height), theme["bg_color"])
    draw = ImageDraw.Draw(img)

    y_cursor = padding

    # Reply line at the top
    draw.line(
        [(padding, 0), (padding, y_cursor)],
        fill=theme["reply_line"],
        width=2,
    )

    # Avatar (smaller for comments)
    avatar_color = AVATAR_COLORS[index % len(AVATAR_COLORS)]
    _draw_avatar(draw, padding, y_cursor, avatar_size, avatar_color)

    # Username
    draw.text(
        (padding + avatar_size + 12, y_cursor + 5),
        author,
        fill=theme["text_color"],
        font=username_font,
    )

    # Time indicator
    draw.text(
        (padding + avatar_size + 12, y_cursor + 30),
        "Trả lời",
        fill=theme["secondary_text"],
        font=meta_font,
    )

    y_cursor += avatar_size + 15

    # Body text
    for line in text_lines:
        draw.text(
            (padding + 10, y_cursor),
            line,
            fill=theme["text_color"],
            font=body_font,
        )
        y_cursor += line_height

    # Bottom separator
    y_cursor += 10
    draw.line(
        [(padding, y_cursor), (width - padding, y_cursor)],
        fill=theme["border_color"],
        width=1,
    )

    return img


def get_screenshots_of_threads_posts(thread_object: dict, screenshot_num: int):
    """Generate screenshots for a Threads post.

    Replaces get_screenshots_of_reddit_posts() for Threads.

    Args:
        thread_object: Thread object from threads_client.py.
        screenshot_num: Number of screenshots to generate.
    """
    W: Final[int] = int(settings.config["settings"]["resolution_w"])
    H: Final[int] = int(settings.config["settings"]["resolution_h"])
    theme: str = settings.config["settings"].get("theme", "dark")
    storymode: bool = settings.config["settings"].get("storymode", False)

    print_step("Đang tạo hình ảnh cho bài viết Threads...")

    thread_id = re.sub(r"[^\w\s-]", "", thread_object["thread_id"])
    Path(f"assets/temp/{thread_id}/png").mkdir(parents=True, exist_ok=True)

    # Image for the main post (title)
    title_img = create_thread_post_image(
        thread_object,
        theme_name=theme if theme in THEMES else "dark",
        width=W,
    )
    title_img.save(f"assets/temp/{thread_id}/png/title.png")
    print_substep("Đã tạo hình ảnh tiêu đề", style="bold green")

    if storymode:
        # Story mode: a single image covers the whole content
        story_img = create_thread_post_image(
            {
                "thread_author": thread_object.get("thread_author", "@user"),
                "thread_title": thread_object.get("thread_post", ""),
            },
            theme_name=theme if theme in THEMES else "dark",
            width=W,
        )
        story_img.save(f"assets/temp/{thread_id}/png/story_content.png")
    else:
        # Comment mode: one image per reply (the slice caps the count)
        comments = thread_object.get("comments", [])[:screenshot_num]
        for idx, comment in enumerate(track(comments, "Đang tạo hình ảnh replies...")):
            comment_img = create_comment_image(
                comment,
                index=idx,
                theme_name=theme if theme in THEMES else "dark",
                width=W,
            )
            comment_img.save(f"assets/temp/{thread_id}/png/comment_{idx}.png")

    print_substep("Đã tạo tất cả hình ảnh thành công! ✅", style="bold green")
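The greedy wrapping in `_wrap_text` only needs a `font.getbbox` method, so its behavior can be checked without Pillow by standing in a stub font with a fixed character width. The stub class and the standalone `wrap_text` copy below are illustrative:

```python
class FixedWidthFont:
    """Stub standing in for a Pillow font: every character is 10 px wide."""

    def getbbox(self, text: str):
        # Pillow's getbbox returns (left, top, right, bottom)
        return (0, 0, 10 * len(text), 10)


def wrap_text(text: str, font, max_width: int):
    # Same greedy algorithm as _wrap_text above
    words = text.split()
    lines, current = [], ""
    for word in words:
        test = f"{current} {word}".strip()
        if font.getbbox(test)[2] <= max_width:
            current = test
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines or [""]


# With a 90 px budget (9 characters), "one two" fits but "one two three" does not.
print(wrap_text("one two three four", FixedWidthFont(), 90))  # ['one two', 'three', 'four']
```

Real fonts are proportional, so on a live `ImageFont` the break points depend on the actual glyph widths, but the greedy fill order is identical.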