| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,774,032
| 9,072,753
|
How to make a copy of stdout that does not close?
|
<p>I want a copy of sys.stdout that will not be closed.</p>
<p>There is some generic code that does:</p>
<pre><code>def dosomething(fd: IO[bytes], input):
with fd as f:
for buf in input:
f.write(buf)
</code></pre>
<p>I would like to pass sys.stdout to it in such a way that it will not be closed by the <code>with</code> block. Ideally:</p>
<pre><code>class NoCloseIO(IO[bytes]):
"""IO[bytes] that does not close"""
def close(self):
pass
dosomething(NoCloseIO(sys.stdout.buffer), io.StringIO("some text"))
</code></pre>
<p>How would I do that? I do not understand how to do it. Can I <code>reopen(sys.stdout)</code> somehow?</p>
<p>Ideally I would want to do this with any <code>TextIO</code> or <code>BinaryIO</code>.</p>
<hr />
<p>I see some confusion, so let's add some color. I am writing my own <code>subprocess.run</code>-style interface over a protocol that uses websocket communication with a remote process. <code>subprocess.run</code> surely closes the file descriptor for <code>stdout=PIPE</code> or <code>stdout=DEVNULL</code>, but does not close it for <code>stdout=sys.stdout</code> or <code>stdout=1</code>. I am trying to create an interface that properly passes the information about whether an IO object should be closed or not "down" the abstractions to the decoding thread above the websocket communication thread. I could add a separate variable to many functions... but my intention is to just mark <code>close()</code> as a no-op in specific cases and keep the code clean.</p>
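<p>For illustration, a minimal sketch of the no-op-close wrapper I have in mind (untested; the name <code>NoCloseWriter</code> and the <code>__getattr__</code> delegation are just my working idea, not an existing API):</p>
<pre><code>import sys
from typing import IO

class NoCloseWriter:
    """Wrap a binary stream so that close() does nothing."""

    def __init__(self, raw: IO[bytes]):
        self._raw = raw

    def close(self) -> None:
        # Deliberately a no-op: the wrapped stream stays open.
        pass

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()
        return False

    def __getattr__(self, name):
        # Delegate everything else (write, flush, ...) to the wrapped stream.
        return getattr(self._raw, name)

# dosomething(NoCloseWriter(sys.stdout.buffer), [b"some text"])
</code></pre>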
|
<python><stream><stdout><python-3.7>
|
2025-09-24 17:58:32
| 2
| 145,478
|
KamilCuk
|
79,774,021
| 2,482,575
|
How to display local time from a UTC time
|
<p>I have some data with date time in UTC. My local time is usually 5 or 6 hours behind depending on standard or daylight time. I would like to print the local time.</p>
<p>Is there an easy way to do that in python, or do I have to manually do a timedelta on the hours with the time offset?</p>
<p>So for example, I have a date string of 2025-10-01T19:20:00.000Z, and my end goal is to print a date of Wed Oct 01 2:20 PM.</p>
<p>Here is my code:</p>
<pre><code>from datetime import datetime, timezone, timedelta
from tzlocal import get_localzone
from zoneinfo import ZoneInfo
test_date = "2025-10-01T19:20:00.000Z"
d = datetime.fromisoformat(test_date[:-1]).astimezone(ZoneInfo("America/Chicago"))
print(d)
print(d.strftime('%a %b %d %-I:%M %p').upper())
</code></pre>
<p>and the output is:</p>
<pre class="lang-none prettyprint-override"><code>2025-10-01 19:20:00-05:00
WED OCT 01 7:20 PM
</code></pre>
<p>I'd like to be able to output WED OCT 01 2:20 PM as that would show local time.</p>
|
<python><datetime>
|
2025-09-24 17:38:02
| 1
| 439
|
tman
|
79,773,690
| 856,976
|
Building a Python extension with a local C++ library dependency using setuptools and build
|
<p>I have a library written in C++ for which I would like to make a Python wrapper. The directory structure of my repository is similar to the following:</p>
<pre><code>.
โโโ include # C++ library headers
โย ย โโโ โฆ
โโโ Makefile
โโโ python # Python package
โ โโโ wrapper-module # C++ and Python source files
โ โ โโโ โฆ
โ โโโ pyproject.toml
โ โโโ setup.py
โย ย โโโ โฆ
โโโ src # C++ library source files
โย ย โโโ โฆ
โโโ tests # C++ library tests
ย ย โโโ โฆ
</code></pre>
<p>I use GNU make for building the C++ library and use Python's C API directly to implement the wrapper. (I.e. no Boost.Python, Cython etc.)</p>
<p>Building the Python package has been unsuccessful so far. Since I use the C API, I chose setuptools 80.9.0 for building the module. When running <code>python -m build</code>, the relevant files are copied to a separate location, even if I use <code>--no-isolation</code>. So far I have been unable to make the C++ library available in that location or determine its relative path with respect to that location. (I would like to avoid specifying its absolute path or installing it to e.g. <code>/usr</code>.) The source files for the wrapper module (or at least most of them) are copied as expected.</p>
<p>So far I have attempted to solve the issue as follows:</p>
<ul>
<li>Checking the documentation. What I have found so far has had to do with Python dependencies, not 'local' ones.</li>
<li>Copying the C++ library build products to the Python directory with a build phase in the Makefile. In this case the build products are not copied to the separate build location. I have not found a configuration option to specify the directory in question as a build dependency.</li>
<li>Adding a custom build phase in <code>setup.py</code>. In this case I have not found a function or property to retrieve the original path in order to copy the required files as part of the build process.</li>
<li>Writing a custom build backend based on <code>setuptools.build_meta</code>, roughly as sketched after this list. In this case I can determine the original path using <code>os.getcwd()</code>. However, the module that contains the custom backend is not copied to the separate build location even though I have set <code>build-backend</code> and <code>backend-path</code> in <code>pyproject.toml</code>. (In any case I probably would need to store the original path somehow, to which I have not paid much attention.)</li>
</ul>
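<p>For reference, the custom-backend attempt looks roughly like this (a simplified sketch of my experiment; the module name <code>_backend</code> and the environment variable are placeholders):</p>
<pre><code># python/_backend.py -- thin wrapper around setuptools.build_meta
import os
from setuptools import build_meta as _orig

# Captured when the backend module is imported, i.e. in the original source tree.
ORIGINAL_CWD = os.getcwd()

# Re-export the hooks that are not customised.
get_requires_for_build_wheel = _orig.get_requires_for_build_wheel
prepare_metadata_for_build_wheel = _orig.prepare_metadata_for_build_wheel
build_sdist = _orig.build_sdist

def build_wheel(wheel_directory, config_settings=None, metadata_directory=None):
    # Expose the original repository location to setup.py, e.g. through an
    # environment variable, so it could find the C++ library and headers.
    os.environ["CPP_LIB_ROOT"] = os.path.abspath(os.path.join(ORIGINAL_CWD, ".."))
    return _orig.build_wheel(wheel_directory, config_settings, metadata_directory)
</code></pre>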
<p>Any suggestions for making setuptools aware of the location of the built C++ library and the associated header files are appreciated.</p>
|
<python><python-3.x><setuptools><python-c-api><pyproject.toml>
|
2025-09-24 12:06:38
| 2
| 2,139
|
tsnorri
|
79,773,595
| 18,910,865
|
Does `dag.test()` in Airflow write to the metadata database?
|
<p>I'm debugging my DAGs in PyCharm using the <a href="https://airflow.apache.org/docs/apache-airflow/stable/_api/airflow/models/dag/index.html#airflow.models.dag.DAG.test" rel="nofollow noreferrer"><code>dag.test()</code></a> method (Airflow 2.9.1). I noticed that after running it a few times, my environment got into a weird state (i.e. multiple calls to one of the endpoints the DAG points to, probably due to retries), and I'm not sure whether it's because <code>dag.test()</code> is writing something to the Airflow metadata DB.</p>
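<p>For context, this is roughly how I launch it (simplified; assuming the DAG object defined in the file is called <code>dag</code>):</p>
<pre><code># at the bottom of the DAG file, used as a PyCharm run configuration
if __name__ == "__main__":
    dag.test()
</code></pre>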
<p>After a few launches of the DAG it became impossible to get to a certain task, and I started getting the following error:</p>
<pre><code>{dag.py:2923} WARNING - No tasks to run. unrunnable tasks:
{<TaskInstance: dags-np-flow.split_payload manual__2025-09-23T10:27:22.745934+00:00 [None]>,
<TaskInstance: dags-np-flow.filter_dm_ok manual__2025-09-23T10:27:22.745934+00:00 [None]>,
<TaskInstance: dags-np-flow.fetch_triggered_dag_xcom manual__2025-09-23T10:27:22.745934+00:00 [None]>,
<TaskInstance: dags-np-flow.recall_step manual__2025-09-23T10:27:22.745934+00:00 [None]>,
<TaskInstance: dags-np-flow.recall_dm_step manual__2025-09-23T10:27:22.745934+00:00 [running]>}
</code></pre>
<p>I was initially able to solve it via <code>airflow tasks clear dags-to-run --yes</code>, but after a while I was forced to do <code>airflow db reset</code> in order to be able to run it again.</p>
<h3>Questions</h3>
<ul>
<li>Does <code>dag.test()</code> ever create DAG runs, task instances, or XComs in the metadata database? I thought it was using entirely local resources.</li>
<li>Is <code>dag.test()</code> safe to use repeatedly in a development environment, or should I prefer another approach for local debugging?</li>
</ul>
|
<python><pycharm><airflow>
|
2025-09-24 10:27:52
| 1
| 522
|
Nauel
|
79,773,485
| 3,070,181
|
How to prevent uv with a tkinter application giving xcb error
|
<p>I am using uv to manage my Python environments, and I get a low-level error message when I try to run a simple tkinter script:</p>
<pre><code># xxx.py
import tkinter as tk
def main():
root = tk.Tk()
label = tk.Label(root, text="This line causes the problem")
label.pack()
root.mainloop()
if __name__ == "__main__":
main()
</code></pre>
<p>When I run the code using</p>
<pre><code>uv run xxx.py
</code></pre>
<p>I get an error message</p>
<pre><code>$ uv run xxx.py
Using CPython 3.13.6
Creating virtual environment at: .venv
[xcb] Unknown sequence number while appending request
[xcb] You called XInitThreads, this is not your fault
[xcb] Aborting, sorry about that.
python: xcb_io.c:157: append_pending_request: Assertion `!xcb_xlib_unknown_seq_number' failed.
</code></pre>
<p>I am NOT using threading!</p>
<p>My system is:</p>
<pre><code>Operating System: Manjaro Linux
KDE Plasma Version: 6.3.6
KDE Frameworks Version: 6.17.0
Qt Version: 6.9.1
Kernel Version: 6.6.90-1-MANJARO (64-bit)
Graphics Platform: X11
Processors: 20 × 13th Gen Intel® Core™ i5-13500
Memory: 62.5 GiB of RAM
</code></pre>
<p>lspci | grep VGA</p>
<pre><code>01:00.0 VGA compatible controller: NVIDIA Corporation AD106 [GeForce RTX 4060 Ti 16GB] (rev a1)
</code></pre>
<p>I cannot fathom the problem</p>
|
<python><tkinter><xcb><uv>
|
2025-09-24 08:50:57
| 1
| 3,841
|
Psionman
|
79,773,306
| 18,064,892
|
How to establish a real-time socket connection between Laravel (PHP) and Python?
|
<p>I'm building a real-time location tracking system with:</p>
<ul>
<li>Backend: Laravel 11.45.2 (PHP 8.2.29)</li>
<li>Frontend: Flutter</li>
<li>Hosting: cPanel</li>
</ul>
<p>So, how should I connect this? Am I approaching it the right way or not?</p>
<p>Constraints:</p>
<ul>
<li>I don't want to use Pusher because of the cost.</li>
<li>Laravel Reverb isn't an option because it only relies on POST APIs.</li>
<li>BeyondCode WebSockets doesn't support my PHP/Laravel versions.</li>
</ul>
<p><code>real_time_location_tracking.py</code></p>
<pre><code># real_time_location_tracking.py - Complete Flask-SocketIO Application for cPanel
import sys
import os
import logging
from datetime import datetime
# Add the project directory to Python path
sys.path.insert(0, os.path.dirname(__file__))
# Configure logging for cPanel
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s %(levelname)s: %(message)s',
handlers=[
logging.FileHandler('real_time_location_tracking.log'),
logging.StreamHandler(sys.stdout)
]
)
logger = logging.getLogger(__name__)
# Import eventlet and monkey patch BEFORE importing Flask
import eventlet
eventlet.monkey_patch()
from flask import Flask, request, jsonify
from flask_socketio import SocketIO, emit, join_room
import requests
import time
# ------------------------------
# Flask app setup
# ------------------------------
app = Flask(__name__)
# app.config['SECRET_KEY'] = 'real-time-location-tracking-secret-key-2024'
# Configure SocketIO for cPanel/Passenger
socketio = SocketIO(
app,
cors_allowed_origins="*",
transports=["polling"], # Force polling only for cPanel
async_mode="eventlet",
allow_upgrades=False, # Disable upgrades for stability
ping_timeout=60,
ping_interval=25,
max_http_buffer_size=1_000_000,
compression=False, # Disable compression to avoid issues
engineio_logger=False, # Disable to reduce log spam
logger=False
)
# Laravel API endpoint
LARAVEL_API = "https://io.fixhr.app/api/location/update"
# In-memory store for locations
live_locations = {}
# Requests session with proper headers
session = requests.Session()
session.headers.update({
"Content-Type": "application/json",
"Accept": "application/json",
"User-Agent": "RealTimeLocationTracker/1.0"
})
# ------------------------------
# Socket.IO Events
# ------------------------------
@socketio.on("connect")
def on_connect(auth):
client_id = request.sid
logger.info(f"Client connected: {client_id}")
# Send confirmation with transport info
emit("connection_confirmed", {
"status": "connected",
"socketId": client_id,
"timestamp": datetime.now().isoformat(),
"transport": "polling",
"server": "RealTimeLocationTracker-cPanel"
})
return True
@socketio.on("disconnect")
def on_disconnect():
client_id = request.sid
logger.info(f"Client disconnected: {client_id}")
@socketio.on("joinTrackingRoom")
def on_join_tracking_room(data):
try:
user_id = data.get('userId', 'default')
room = f"tracking_{user_id}"
join_room(room)
logger.info(f"User {user_id} joined tracking room: {room}")
emit("joinedTrackingRoom", {
"status": "success",
"room": room,
"socketId": request.sid,
"userId": user_id,
"timestamp": datetime.now().isoformat(),
"transport": "polling"
})
except Exception as e:
logger.error(f"Error joining room: {str(e)}")
emit("error", {"message": "Failed to join tracking room"})
def send_to_laravel_async(data, user_id):
"""Send location to Laravel API asynchronously"""
def _send():
start_time = time.time()
try:
response = session.post(LARAVEL_API, json=data, timeout=10)
elapsed = (time.time() - start_time) * 1000
if response.status_code == 200:
logger.info(f"Location saved for {user_id} ({elapsed:.2f}ms)")
else:
logger.warning(f"Laravel API {response.status_code} for {user_id}: {response.text[:200]}")
except requests.exceptions.Timeout:
logger.error(f"Timeout sending location for {user_id}")
except requests.exceptions.ConnectionError:
logger.error(f"Connection error sending location for {user_id}")
except Exception as e:
logger.error(f"Error sending location for {user_id}: {str(e)}")
# Use eventlet spawn for async execution
eventlet.spawn(_send)
def handle_location(data, event_ack, event_broadcast):
"""Process location update"""
try:
user_id = data.get("userId", "unknown")
latitude = data.get("latitude")
longitude = data.get("longitude")
timestamp = data.get("timestamp", datetime.now().isoformat())
# Validate required fields
if not latitude or not longitude:
logger.warning(f"Invalid location from {user_id}: lat={latitude}, lng={longitude}")
emit("error", {"message": "Invalid location data - latitude and longitude required"})
return
# Convert to float and validate ranges
try:
lat_float = float(latitude)
lng_float = float(longitude)
if not (-90 <= lat_float <= 90) or not (-180 <= lng_float <= 180):
logger.warning(f"Invalid coordinates from {user_id}: lat={lat_float}, lng={lng_float}")
emit("error", {"message": "Invalid coordinates - latitude must be -90 to 90, longitude -180 to 180"})
return
except (ValueError, TypeError):
logger.warning(f"Non-numeric coordinates from {user_id}")
emit("error", {"message": "Coordinates must be valid numbers"})
return
# Store location in memory
live_locations[user_id] = {
"userId": user_id,
"latitude": lat_float,
"longitude": lng_float,
"timestamp": timestamp,
"last_updated": datetime.now().isoformat(),
"accuracy": data.get("accuracy", None),
"altitude": data.get("altitude", None),
"heading": data.get("heading", None),
"speed": data.get("speed", None)
}
logger.info(f"Location updated for {user_id}: ({lat_float:.6f}, {lng_float:.6f})")
# Send acknowledgment to sender
emit(event_ack, {
"status": "received",
"userId": user_id,
"timestamp": datetime.now().isoformat(),
"originalTimestamp": timestamp,
"coordinates": {
"latitude": lat_float,
"longitude": lng_float
}
})
# Broadcast to room members
room = f"tracking_{user_id}"
emit(event_broadcast, {
"userId": user_id,
"latitude": lat_float,
"longitude": lng_float,
"timestamp": timestamp,
"server_timestamp": datetime.now().isoformat(),
"accuracy": data.get("accuracy"),
"source": "real_time_tracker"
}, room=room)
# Send to Laravel API asynchronously
send_to_laravel_async({
"userId": user_id,
"latitude": lat_float,
"longitude": lng_float,
"timestamp": timestamp,
"accuracy": data.get("accuracy"),
"source": "flutter_app"
}, user_id)
except Exception as e:
logger.error(f"Error in handle_location: {str(e)}")
emit("error", {"message": "Internal server error processing location"})
@socketio.on("locationUpdate")
def on_location_update(data):
"""Handle incoming location updates"""
try:
if not data:
emit("error", {"message": "No location data received"})
return
logger.info(f"Received location update: {data}")
handle_location(data, "locationReceived", "location_broadcast")
except Exception as e:
logger.error(f"Error processing location update: {str(e)}")
emit("error", {"message": "Failed to process location update"})
@socketio.on("getLocationHistory")
def on_get_location_history(data):
"""Get location history for a user"""
try:
user_id = data.get("userId")
if user_id and user_id in live_locations:
emit("locationHistory", {
"status": "success",
"userId": user_id,
"location": live_locations[user_id],
"timestamp": datetime.now().isoformat()
})
else:
emit("locationHistory", {
"status": "not_found",
"userId": user_id,
"message": "No location data found for this user"
})
except Exception as e:
logger.error(f"Error getting location history: {str(e)}")
emit("error", {"message": "Failed to get location history"})
# ------------------------------
# HTTP Routes
# ------------------------------
@app.route("/")
def index():
"""Root endpoint with server info"""
return jsonify({
"status": "running",
"server": "Real-Time Location Tracker",
"version": "1.0.0",
"time": datetime.now().isoformat(),
"transport": "polling",
"locations_count": len(live_locations),
"active_users": len(live_locations),
"endpoints": {
"health": "/health",
"locations": "/api/locations",
"stats": "/api/stats",
"socket": "/socket.io/"
}
})
@app.route("/health")
def health_check():
"""Health check endpoint for monitoring"""
return jsonify({
"status": "healthy",
"server_time": datetime.now().isoformat(),
"locations_count": len(live_locations),
"active_users": len(live_locations),
"cpanel_mode": True,
"transport": "polling",
"uptime": "running",
"memory_usage": f"{len(live_locations)} locations stored"
})
@app.route("/api/locations")
def get_all_locations():
"""Get all stored locations"""
try:
return jsonify({
"status": "success",
"locations": list(live_locations.values()),
"count": len(live_locations),
"timestamp": datetime.now().isoformat(),
"server": "RealTimeLocationTracker"
})
except Exception as e:
logger.error(f"Error getting locations: {str(e)}")
return jsonify({"status": "error", "message": "Failed to retrieve locations"}), 500
@app.route("/api/location/<user_id>")
def get_location(user_id):
"""Get specific user location"""
try:
if user_id in live_locations:
return jsonify({
"status": "success",
"location": live_locations[user_id],
"timestamp": datetime.now().isoformat(),
"server": "RealTimeLocationTracker"
})
return jsonify({
"status": "not_found",
"message": f"No location found for user: {user_id}",
"available_users": list(live_locations.keys())
}), 404
except Exception as e:
logger.error(f"Error getting location for {user_id}: {str(e)}")
return jsonify({"status": "error", "message": "Failed to retrieve location"}), 500
@app.route("/api/stats")
def get_stats():
"""Get detailed server statistics"""
try:
current_time = datetime.now()
# Calculate active users (updated in last hour)
active_count = 0
for location_data in live_locations.values():
try:
last_updated = datetime.fromisoformat(location_data["last_updated"].replace("Z", "+00:00"))
if (current_time - last_updated.replace(tzinfo=None)).seconds < 3600:
active_count += 1
except:
pass
return jsonify({
"status": "success",
"stats": {
"total_locations": len(live_locations),
"active_users_last_hour": active_count,
"all_users": list(live_locations.keys()),
"server_time": current_time.isoformat(),
"transport": "polling",
"server": "RealTimeLocationTracker",
"version": "1.0.0",
"laravel_api": LARAVEL_API
}
})
except Exception as e:
logger.error(f"Error getting stats: {str(e)}")
return jsonify({"status": "error", "message": "Failed to retrieve statistics"}), 500
@app.route("/api/clear-locations")
def clear_locations():
"""Clear all stored locations (for testing)"""
try:
count = len(live_locations)
live_locations.clear()
logger.info(f"Manually cleared {count} locations")
return jsonify({
"status": "success",
"message": f"Cleared {count} locations",
"timestamp": datetime.now().isoformat()
})
except Exception as e:
logger.error(f"Error clearing locations: {str(e)}")
return jsonify({"status": "error", "message": "Failed to clear locations"}), 500
# ------------------------------
# Background cleanup task
# ------------------------------
def cleanup_old_locations():
"""Clean up stale locations every 30 minutes"""
while True:
try:
eventlet.sleep(1800) # 30 minutes
cutoff_time = datetime.now().timestamp() - 3600 # 1 hour ago
removed_count = 0
for uid in list(live_locations.keys()):
try:
data = live_locations[uid]
last_updated = data.get("last_updated", "")
# Parse timestamp (handle both Z and +00:00 formats)
timestamp_str = last_updated.replace("Z", "+00:00")
ts = datetime.fromisoformat(timestamp_str).timestamp()
if ts < cutoff_time:
del live_locations[uid]
removed_count += 1
logger.info(f"Removed stale location for {uid}")
except Exception as e:
# If there's any error parsing, remove the entry
if uid in live_locations:
del live_locations[uid]
removed_count += 1
logger.warning(f"Removed invalid location entry for {uid}: {e}")
if removed_count > 0:
logger.info(f"Cleanup completed: removed {removed_count} stale locations")
else:
logger.info(f"Cleanup completed: no stale locations found")
except Exception as e:
logger.error(f"Error in cleanup task: {str(e)}")
# Start cleanup task
eventlet.spawn(cleanup_old_locations)
# ------------------------------
# Error handlers
# ------------------------------
@app.errorhandler(404)
def not_found(error):
return jsonify({
"status": "error",
"message": "Endpoint not found",
"available_endpoints": ["/", "/health", "/api/locations", "/api/stats"]
}), 404
@app.errorhandler(500)
def internal_error(error):
logger.error(f"Internal server error: {str(error)}")
return jsonify({
"status": "error",
"message": "Internal server error",
"timestamp": datetime.now().isoformat()
}), 500
@app.errorhandler(Exception)
def handle_exception(e):
logger.error(f"Unhandled exception: {str(e)}")
return jsonify({
"status": "error",
"message": "An unexpected error occurred",
"timestamp": datetime.now().isoformat()
}), 500
# ------------------------------
# WSGI Application for Passenger
# ------------------------------
# CRITICAL: Use Flask app as WSGI application
application = app
# ------------------------------
# Development mode
# ------------------------------
if __name__ == "__main__" or os.environ.get("FLASK_ENV") == "development":
logger.info("Starting Real-Time Location Tracker in development mode")
socketio.run(
app,
host="0.0.0.0",
port=int(os.environ.get("PORT", 5000)),
debug=False, # Disable debug in production
use_reloader=False
)
else:
logger.info("Real-Time Location Tracker loaded for production (cPanel/Passenger)")
logger.info(f"Application ready - Server time: {datetime.now().isoformat()}")
</code></pre>
<p><code>passenger_wsgi.py</code></p>
<pre><code>#!/usr/bin/python3
# passenger_wsgi.py - WSGI Entry Point for cPanel/Passenger
# Import required modules at the top
import sys
import os
import logging
from datetime import datetime
# Configure basic logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s %(levelname)s: %(message)s'
)
logger = logging.getLogger(__name__)
# Add your project directory to the sys.path
project_home = os.path.dirname(__file__)
if project_home not in sys.path:
sys.path.insert(0, project_home)
logger.info(f"Project directory added to path: {project_home}")
logger.info(f"Python path: {sys.path[:3]}...") # Show first 3 entries
try:
# Import the Flask application from real_time_location_tracking.py
logger.info("Importing real_time_location_tracking module...")
from real_time_location_tracking import application
logger.info("Successfully imported Flask application")
logger.info(f"Application type: {type(application)}")
logger.info(f"Import completed at: {datetime.now().isoformat()}")
except ImportError as e:
logger.error(f"Import error: {e}")
logger.error("Make sure 'real_time_location_tracking.py' exists and has 'application' variable")
# Create a simple error application
def application(environ, start_response):
status = '500 Internal Server Error'
headers = [
('Content-Type', 'application/json'),
('Access-Control-Allow-Origin', '*')
]
start_response(status, headers)
error_response = f'''{{
"status": "error",
"message": "Failed to import application module",
"error": "{str(e)}",
"timestamp": "{datetime.now().isoformat()}",
"help": "Check if real_time_location_tracking.py exists and contains 'application' variable"
}}'''
return [error_response.encode('utf-8')]
except Exception as e:
logger.error(f"General error during import: {e}")
logger.error(f"Error type: {type(e).__name__}")
# Create a simple error application
def application(environ, start_response):
status = '500 Internal Server Error'
headers = [
('Content-Type', 'application/json'),
('Access-Control-Allow-Origin', '*')
]
start_response(status, headers)
error_response = f'''{{
"status": "error",
"message": "Application startup failed",
"error": "{str(e)}",
"error_type": "{type(e).__name__}",
"timestamp": "{datetime.now().isoformat()}"
}}'''
return [error_response.encode('utf-8')]
# For debugging - this won't run in production but helps with testing
if __name__ == "__main__":
logger.info("WSGI module being tested directly")
logger.info(f"Application object available: {application}")
print("passenger_wsgi.py loaded successfully")
print(f"Application: {application}")
print(f"Loaded at: {datetime.now().isoformat()}")
# Final confirmation log
logger.info("passenger_wsgi.py setup completed successfully")
logger.info("Ready to serve requests via Passenger")
</code></pre>
<p>My flutter file code</p>
<pre><code>// enhanced_socket_service.dart - Updated for cPanel compatibility
import 'dart:async';
import 'dart:io';
import 'package:connectivity_plus/connectivity_plus.dart';
import 'package:hrm_employee/utils/fh_constant.dart';
import 'package:hrm_employee/utils/shared_preferences.dart';
import 'package:http/http.dart' as http;
import 'package:socket_io_client/socket_io_client.dart' as IO;
class SocketService {
static IO.Socket? _socket;
static bool _isConnected = false;
static bool _isConnecting = false;
static Timer? _reconnectTimer;
static int _reconnectAttempts = 0;
static const int _maxReconnectAttempts = 20; // Reduced for cPanel
static const Duration _reconnectDelay = Duration(seconds: 5);
static bool _shouldAutoReconnect = true;
// cPanel-optimized URL
static const String _serverUrl = 'https://socket.io.fixhr.app';
// Stream controllers for real-time data
static final StreamController<Map<String, dynamic>> _locationUpdatesController =
StreamController<Map<String, dynamic>>.broadcast();
static final StreamController<Map<String, dynamic>> _connectionStatusController =
StreamController<Map<String, dynamic>>.broadcast();
// Getters for streams
static Stream<Map<String, dynamic>> get locationUpdatesStream => _locationUpdatesController.stream;
static Stream<Map<String, dynamic>> get connectionStatusStream => _connectionStatusController.stream;
static bool get isConnected => _isConnected;
static bool get isConnecting => _isConnecting;
static String? currentUserDeviceId = '';
static Future<bool> testServerConnectivity() async {
try {
final response = await http.get(
Uri.parse('$_serverUrl/health'),
headers: {
'Connection': 'keep-alive',
'User-Agent': 'Flutter-Client/1.0',
'Accept': 'application/json',
},
).timeout(const Duration(seconds: 15));
if (response.statusCode == 200) {
return true;
}
return false;
} catch (_) {
return false;
}
}
static Future<void> connect() async {
if (_isConnecting) return;
if (_isConnected) return;
if (!await checkInternetConnection()) return;
bool serverReachable = await testServerConnectivity();
if (!serverReachable) {
_handleConnectionFailure();
return;
}
_isConnecting = true;
_shouldAutoReconnect = true;
await getDevicesId();
await getDeviceIP();
try {
if (_socket != null) {
_socket!.dispose();
_socket = null;
}
_socket = IO.io(
_serverUrl,
IO.OptionBuilder()
.setTransports(['polling'])
.enableReconnection()
.setReconnectionAttempts(5)
.setReconnectionDelayMax(15000)
.setReconnectionDelay(3000)
.enableForceNew()
.setTimeout(20000)
.setPath('/socket.io/')
.setExtraHeaders({
'Connection': 'keep-alive',
'User-Agent': 'Flutter-App/1.0',
'Accept': '*/*',
'Origin': _serverUrl,
'Cache-Control': 'no-cache',
})
.setQuery({
'userId': currentUserDeviceId ?? 'flutter_client',
'transport': 'polling',
'client': 'flutter',
'version': '1.0',
't': DateTime.now().millisecondsSinceEpoch.toString(),
})
.build(),
);
Timer? connectionTimeout = Timer(const Duration(seconds: 20), () {
if (!_isConnected) {
_handleConnectionFailure();
}
});
_socket!.onConnect((_) {
connectionTimeout?.cancel();
_isConnected = true;
_isConnecting = false;
_reconnectAttempts = 0;
_cancelReconnectTimer();
_connectionStatusController
.add({'connected': true, 'socketId': _socket?.id, 'server': 'cPanel-Passenger', 'transport': 'polling'});
_setupEventListeners();
_joinLocationRoom();
});
_socket!.onDisconnect((reason) {
connectionTimeout?.cancel();
_isConnected = false;
_isConnecting = false;
_connectionStatusController.add({'connected': false, 'reason': reason, 'server': 'cPanel-Passenger'});
if (reason != 'io client disconnect' && _shouldAutoReconnect) {
_scheduleReconnect();
}
});
_socket!.onConnectError((_) {
connectionTimeout?.cancel();
_handleConnectionFailure();
});
_socket!.onError((_) {});
_socket!.connect();
} catch (_) {
_handleConnectionFailure();
}
}
static void _setupEventListeners() {
if (_socket == null) return;
_socket!.on('connection_confirmed', (data) {
try {
Map<String, dynamic> confirmationData = Map<String, dynamic>.from(data);
} catch (_) {}
});
_socket!.on('locationReceived', (data) {
try {
Map<String, dynamic> confirmationData = Map<String, dynamic>.from(data);
_locationUpdatesController.add(confirmationData);
} catch (_) {}
});
_socket!.on('joinedTrackingRoom', (_) {});
_socket!.on('notification', (_) {});
_socket!.on('message', (_) {});
_socket!.on('error', (_) {});
_socket!.on('location_broadcast', (data) {
try {
_locationUpdatesController.add(Map<String, dynamic>.from(data));
} catch (_) {}
});
}
static void _joinLocationRoom() {
if (_isConnected && _socket != null && currentUserDeviceId != null) {
final roomData = {
'userId': currentUserDeviceId,
'timestamp': DateTime.now().toIso8601String(),
'client': 'flutter',
};
_socket!.emit('joinTrackingRoom', roomData);
}
}
static Map<String, dynamic> getConnectionInfo() {
return {
'isConnected': _isConnected,
'isConnecting': _isConnecting,
'reconnectAttempts': _reconnectAttempts,
'shouldAutoReconnect': _shouldAutoReconnect,
'serverUrl': _serverUrl,
'socketId': _socket?.id,
'server': 'cPanel-Passenger',
'transport': 'polling',
'userId': currentUserDeviceId,
};
}
static Future<Map<String, dynamic>> getServerStats() async {
try {
final response = await http.get(
Uri.parse('$_serverUrl/api/stats'),
headers: {'Accept': 'application/json'},
).timeout(const Duration(seconds: 10));
if (response.statusCode == 200) {
return Map<String, dynamic>.from(Map<String, dynamic>.from(Map<String, dynamic>.from(response.body as Map)));
}
} catch (_) {}
return {'error': 'Failed to fetch stats'};
}
// Missing methods used in logic
static void _handleConnectionFailure() {
_isConnected = false;
_isConnecting = false;
_scheduleReconnect();
}
static void _scheduleReconnect() {
if (_reconnectAttempts >= _maxReconnectAttempts) return;
_reconnectTimer?.cancel();
_reconnectTimer = Timer(_reconnectDelay, () {
_reconnectAttempts++;
connect();
});
}
static void _cancelReconnectTimer() {
_reconnectTimer?.cancel();
_reconnectTimer = null;
}
}
</code></pre>
<p>The Error:</p>
<pre><code>Connecting to cPanel server: https://socket.io.fixhr.app
I/flutter (11682): Initiating socket connection to cPanel...
I/flutter (11682): Server health check - Status: 200
I/flutter (11682): Server response: {"active_users":0,"cpanel_mode":true,"locations_count":0,"memory_usage":"0 locations stored","server
I/flutter (11682): cPanel connection error: WebSocketException: Connection to 'https://socket.io.fixhr.app:0/socket.io/?userId=047f92328d9ff9f1&transport=polling&client=flutter&version=1.0&t=1758691351486&EIO=4#' was not upgraded to websocket, HTTP status code: 200
</code></pre>
|
<python><php><laravel><flutter><websocket>
|
2025-09-24 06:00:09
| 1
| 406
|
Manish sahu
|
79,773,300
| 11,098,908
|
Coordinates satisfying conditions are outside of the expected area
|
<p>I have the following code to calculate and visualise the probability of scoring 10 points or more.</p>
<p>This is the Scoreboard image that I used to run the code:
<a href="https://i.sstatic.net/wtWE2Y8t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wtWE2Y8t.png" alt="enter image description here" /></a></p>
<pre><code>from PIL import Image, ImageDraw
import random, math
img = Image.open("ScoreBoard.png")
img = img.convert("RGB")
pixels = img.load()
width, height = img.size
radius = 3/10 * width
total_area = width * height
def circle(x, r):
return math.sqrt(r**2 - x**2)
N = 5000
black_pix_count = 0
for _ in range(N):
# Randomly select a number within the range of the width of the image.
rand_x = random.randint(0, width-1)
rand_y = random.randint(0, height-1)
if 2 * width/10 <= rand_x <= 8 * width/10:
rise = circle(width/2 - rand_x, radius)
if height/2 - rise <= rand_y <= height/2 + rise:
pixels[rand_x,rand_y] = (0,0,0)
black_pix_count += 1
scoring_area_ratio = black_pix_count / N
print(scoring_area_ratio)
img.show()
</code></pre>
<p>However, the output shows some black dots (score) were <em>slightly</em> outside on the <strong>left side</strong> of the circle of 10 points. What have I done wrong?<a href="https://i.sstatic.net/DadOxN4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DadOxN4E.png" alt="enter image description here" /></a></p>
|
<python><random><python-imaging-library>
|
2025-09-24 05:45:43
| 0
| 1,306
|
Nemo
|
79,773,215
| 1,503,005
|
Forward a Websocket connection using Flask
|
<p>Our app works as follows:</p>
<ul>
<li>Multiple remote machines connect to a central server using SSH connections</li>
<li>The central server has various Python processes which process data sent over these SSH connections and store it in a database</li>
<li>The central server also runs a bunch of Flask-based HTTP servers which process user requests and respond with data extracted from the DB; this forms the backend for the website which users access</li>
</ul>
<p>We have been asked to add support for in-browser VNC connections to the remote machines. This will require the user's browser to connect with a service running on the remote machines via a websocket. The way this would work is:</p>
<ol>
<li>The remote machine, let's say <code>foo</code>, would run the VNC service, which will listen on a specific port expecting a websocket connection.</li>
<li>When <code>foo</code> connects to the central server via SSH, this socket will be forwarded, using <code>ssh -R</code>, to a socket file on the central server, let's say <code>/remote/vnc_websocket/foo/websocket.sock</code></li>
<li>When a user, let's call them Alice, wants to VNC into machine <code>foo</code>, their browser sends a websocket request to <code>central.server/vnc/foo</code></li>
<li>The <code>/vnc/{machine}</code> route in our Flask app processes the request:
<ul>
<li>It checks the login token, sent in the request header, to confirm Alice is logged in</li>
<li>It accesses the DB to confirm that machine <code>foo</code> is connected, and that Alice has permission to access it</li>
</ul>
</li>
<li>Assuming all is well, the flask app then connects the incoming request from Alice to <code>/remote/vnc_websocket/foo/websocket.sock</code>, forwarding all traffic both ways</li>
</ol>
<p>I know how to handle steps 1-4, but I have no idea how to implement step 5. I've looked at <a href="https://flask-socketio.readthedocs.io/en/latest/" rel="nofollow noreferrer">Flask-SocketIO</a> and <a href="https://pypi.org/project/flask-websockets/" rel="nofollow noreferrer">Flask-Websockets</a>, but they're both focused on implementing a websocket server; I just want to connect a request to an existing server and then get out of the way.</p>
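<p>For reference, the route skeleton I have so far looks roughly like this (simplified; <code>check_login</code> and <code>machine_allowed</code> stand in for our existing helpers):</p>
<pre><code>from flask import Flask, request, abort

app = Flask(__name__)

def check_login(token):
    ...  # our existing auth check, returns the user or None

def machine_allowed(user, machine):
    ...  # our existing DB check

@app.route("/vnc/<machine>")
def vnc(machine):
    user = check_login(request.headers.get("Authorization"))
    if user is None:
        abort(401)
    if not machine_allowed(user, machine):
        abort(403)
    socket_path = f"/remote/vnc_websocket/{machine}/websocket.sock"
    # Step 5 goes here: connect the incoming websocket request to socket_path
    # and forward traffic both ways -- this is the part I do not know how to do.
    ...
</code></pre>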
<p>I would prefer to integrate this solution into our existing Flask-based microservices, since we already have the code to handle things like checking logins, but if absolutely necessary I could implement a separate non-Flask-based microservice just for handling websocket requests. It would still have to be Python, though.</p>
|
<python><flask><websocket>
|
2025-09-24 01:48:23
| 1
| 635
|
macdjord
|
79,773,131
| 4,367,177
|
SQLAlchemy - result not being pulled
|
<p>I have this class and this method of another class:</p>
<pre><code>import sqlalchemy as sa
from sqlalchemy import Column, String, Date
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class MyTable(Base):
__tablename__ = 'tablename'
__table_args__ = {'schema': 'some_schema'}
patient = Column(String, primary_key=True)
date_attribute = Column(Date, primary_key=True)
other_attribute = Column(String, primary_key=True)
</code></pre>
<p>This code is then imported into another class</p>
<pre><code>from src.utils.db_utils import create_sqlalchemy_engine
from src.models.model_input.table_file import MyTable
from typing import List
from sqlalchemy import select, and_
from sqlalchemy.orm import sessionmaker, scoped_session
class MyOtherClass():
def __init__(...):
def _create_list(self) -> List[str]:
eng = create_sqlalchemy_engine(client_database)
date_value = '2025-05-19'
hardcoded_value = 'Foobar'
session = scoped_session(sessionmaker(bind=eng))
stmt = select(MyTable.patient).where(MyTable.date_attribute == date_value).where(MyTable.other_attribute == hardcoded_value)
patient_list = session.scalars(stmt).first()
</code></pre>
<p>When I run through this code, I get <code>None</code> for the value of <code>patient_list</code>.</p>
<p>Am I writing my select statement incorrectly? Can I not chain <code>.where()</code> calls on the same select statement?</p>
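<p>For completeness, this is the combined form I would have expected to behave identically (using the <code>and_</code> import above):</p>
<pre><code>stmt = select(MyTable.patient).where(
    and_(
        MyTable.date_attribute == date_value,
        MyTable.other_attribute == hardcoded_value,
    )
)
patient_list = session.scalars(stmt).first()
</code></pre>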
|
<python><sqlalchemy>
|
2025-09-23 22:36:36
| 0
| 302
|
BMac
|
79,773,065
| 926,918
|
Optuna: Selection of parameters during k-fold CV
|
<p>I am using Optuna for hyperparameter tuning. I get messages as shown below:</p>
<pre><code>Trial 15 finished with value: 6.226334123011727 and parameters: {'iterations': 1100, 'learning_rate': 0.04262148853587423, 'depth': 6, 'l2_leaf_reg': 6.63997127673657, 'border_count': 46, 'bagging_temperature': 4.932254276656362, 'random_strength': 3.499938575269665}.
</code></pre>
<p>So my question is: are the printed parameters the ones that Optuna sends to all k folds, or are there parameters from one or more folds (e.g. <code>n_estimators</code>) that would later update the printed values?</p>
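<p>For context, my objective roughly follows this pattern (a simplified sketch, assuming CatBoost and scikit-learn's <code>KFold</code>; the real code tunes more parameters):</p>
<pre><code>import numpy as np
import optuna
from sklearn.model_selection import KFold
from catboost import CatBoostRegressor

X = np.random.rand(500, 10)   # stand-in for my real features
y = np.random.rand(500)       # stand-in for my real target

def objective(trial):
    params = {
        "iterations": trial.suggest_int("iterations", 500, 2000, step=100),
        "learning_rate": trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
        "depth": trial.suggest_int("depth", 4, 10),
        "l2_leaf_reg": trial.suggest_float("l2_leaf_reg", 1.0, 10.0),
    }
    scores = []
    for train_idx, valid_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model = CatBoostRegressor(**params, verbose=0)
        model.fit(X[train_idx], y[train_idx],
                  eval_set=(X[valid_idx], y[valid_idx]),
                  early_stopping_rounds=50)
        pred = model.predict(X[valid_idx])
        scores.append(float(np.sqrt(np.mean((pred - y[valid_idx]) ** 2))))
    # Each trial returns only this averaged score to Optuna.
    return float(np.mean(scores))

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
</code></pre>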
<p>Thanks in advance for any help.</p>
|
<python><hyperparameters><optuna>
|
2025-09-23 20:50:54
| 0
| 1,196
|
Quiescent
|
79,773,030
| 1,747,834
|
How to remove certain compiler flags, when compiling a Python extension through setup.py?
|
<p>My extension is built using <code>python3 setup.py build</code>, which uses all the flags Python wants to use -- followed by my own, specified in <code>extra_compile_args</code>.</p>
<p>I'd like to remove certain flags. For example:</p>
<ul>
<li><code>-Wp,-D_FORTIFY_SOURCE=2</code></li>
<li><code>-fstack-protector-strong</code></li>
<li><code>-Wstrict-prototypes</code></li>
</ul>
<p>Is there a standard way to do it?</p>
<p>As a workaround I append my own flags to <code>extra_compile_args</code> to negate each of the above (roughly as sketched after this list):</p>
<ul>
<li><code>-Wp,-U_FORTIFY_SOURCE</code></li>
<li><code>-fstack-protector-explicit</code></li>
<li><code>-Wno-strict-prototypes</code></li>
</ul>
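<p>The workaround currently looks roughly like this in my <code>setup.py</code> (simplified; the module name and source list are placeholders):</p>
<pre><code>from setuptools import setup, Extension

negate_flags = [
    "-Wp,-U_FORTIFY_SOURCE",
    "-fstack-protector-explicit",
    "-Wno-strict-prototypes",
]

setup(
    name="mymodule",
    ext_modules=[
        Extension(
            "mymodule",
            sources=["mymodule.c"],
            # Appended after the flags Python itself injects, so these
            # merely override them rather than removing them.
            extra_compile_args=negate_flags,
        )
    ],
)
</code></pre>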
<p>But I'd rather those flags weren't there to begin with -- is that possible?</p>
|
<python><setup.py>
|
2025-09-23 20:03:25
| 0
| 4,246
|
Mikhail T.
|
79,772,987
| 9,357,484
|
python setup.py develop did not run successfully
|
<p>I ran the official Colab Page of GroundingDINO, but received an error while running the notebook.</p>
<p>Code Block</p>
<pre><code>%cd {HOME}
!git clone https://github.com/IDEA-Research/GroundingDINO.git
%cd {HOME}/GroundingDINO
!pip install -q -e .
!pip install -q roboflow
</code></pre>
<p>Error:</p>
<pre><code>/content
Cloning into 'GroundingDINO'...
remote: Enumerating objects: 463, done.
remote: Total 463 (delta 0), reused 0 (delta 0), pack-reused 463 (from 1)
Receiving objects: 100% (463/463), 12.91 MiB | 42.50 MiB/s, done.
Resolving deltas: 100% (221/221), done.
/content/GroundingDINO
Preparing metadata (setup.py) ... done
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.8/46.8 kB 3.1 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 207.2/207.2 kB 13.6 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 256.2/256.2 kB 25.0 MB/s eta 0:00:00
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
Note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
Note: This error originates from a subprocess, and is likely not a problem with pip.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 88.7/88.7 kB 4.8 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.8/66.8 kB 6.8 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.9/49.9 MB 14.8 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 76.8 MB/s eta 0:00:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.2/4.2 MB 120.0 MB/s eta 0:00:00
</code></pre>
<p>Please let me know how I can fix this issue.</p>
|
<python><pip><setuptools><setup.py>
|
2025-09-23 19:05:20
| 1
| 3,446
|
Encipher
|
79,772,860
| 2,562,750
|
Calling into a member function of a foreign C++ DLL with mangled names from Python
|
<p>I have a C++ DLL I want to call into from Python. <strong>I have no control over the C++ DLL nor do I have its source code or headers.</strong></p>
<p>The mangled functions are of the form:</p>
<pre><code>2980 BA3 005A3060 ?getFoo@FooLib@@YAAEAVFoo@1@XZ
2638 A4D 005A3020 ?getApplicationData@Foo@FooLib@@QEAAAEAVApplicationData@2@XZ
2639 A4E 005A3030 ?getApplicationData@Foo@FooLib@@QEBAAEBVApplicationData@2@XZ
2738 AB1 000F8A30 ?getDataRootPath@ApplicationData@FooLib@@QEBA?AV?$basic_string@_WU?$char_traits@_W@std@@V?$allocator@_W@2@@std@@XZ
</code></pre>
<p>With the aid of Copilot, I was able to translate these to (crossing fingers that this is right):</p>
<pre class="lang-cpp prettyprint-override"><code>Foo __cdecl FooLib::getFoo()
ApplicationData& __thiscall FooLib::Foo::getApplicationData()
const ApplicationData& __thiscall FooLib::Foo::getApplicationData() const
std::wstring __thiscall FooLib::ApplicationData::getDataRootPath() const
</code></pre>
<p>From Python, with the <code>ctypes</code> docs, I was able to assemble this:</p>
<pre class="lang-python prettyprint-override"><code>from ctypes import *
dll = WinDLL(r"c:\path\to\Foo.dll")
getFoo = getattr(dll, "?getFoo@FooLib@@YAAEAVFoo@1@XZ")
getFoo.argtypes = []
getFoo.restype = c_void_p
getAppData = getattr(dll, "?getApplicationData@Foo@FooLib@@QEAAAEAVApplicationData@2@XZ")
getAppData.argtypes = [c_void_p]
getAppData.restype = c_void_p
getAppData(getFoo()) # returns a pointer reliably, same value each time
</code></pre>
<p>Note that as far as Python <code>ctypes</code> is concerned, the two DLL entries for <code>getApplicationData</code> produce the same values.</p>
<p>However, the final function does not work, crashing Python or throwing an Access Violation, likely because it returns a C++ string type, and <code>ctypes</code> cannot handle that. Best recommendations I've seen have been to create a shim DLL that can call the C++ function and convert it to a C string, which Python <code>ctypes</code> can handle.</p>
<p>On the shim side, I can load the function pointer addresses, but two parts seem to be a problem for the <code>getApplicationData</code> and <code>getDataRootPath</code> functions, which are member functions of the <code>Foo</code> class and <code>ApplicationData</code> classes, respectively. Solutions to this kind of problem on the internet seem to be sparse, or at minimum not fully explained. I'm sure there are also subtleties about addresses and pointers in C++ that are getting in my way as well.</p>
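<p>To make the goal concrete, this is roughly what I am hoping to end up with on the Python side once a shim exists (hypothetical: <code>FooShim.dll</code> and the exported name <code>getDataRootPath_c</code> are mine, not part of the real DLL):</p>
<pre class="lang-python prettyprint-override"><code>from ctypes import WinDLL, c_void_p, c_wchar_p

shim = WinDLL(r"c:\path\to\FooShim.dll")

# Hypothetical C-compatible export: takes the ApplicationData pointer obtained
# via getFoo()/getApplicationData() and returns the path as a wide C string.
shim.getDataRootPath_c.argtypes = [c_void_p]
shim.getDataRootPath_c.restype = c_wchar_p

root_path = shim.getDataRootPath_c(getAppData(getFoo()))
print(root_path)
</code></pre>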
<p><strong>So: How can I solve this issue?</strong></p>
|
<python><c++><windows><dll><interop>
|
2025-09-23 16:29:50
| 3
| 1,104
|
Nick Bauer
|
79,772,776
| 357,546
|
Django migration successfully applied, but the database is not modified
|
<p>I need to use a secondary SQLite database in a new Django project. This database is on the local filesystem but outside the Django folder. Its path is specified in a <code>.env</code> file at the root of the Django project.</p>
<p>I want Django to be able to manage migrations on that database, but I already have data in it, which I don't want to lose.</p>
<p>I was able to integrate the database into the Django project, and I see no error at any point. I can fetch data from the database via the Django shell. However, when I try to apply migrations, nothing happens: the database is not modified, but Django doesn't give me any error (in fact it says the migration has been applied).</p>
<p>Here's what I did:</p>
<ol>
<li>created an "archiver" app within Django</li>
<li>within this app, created a <code>routers.py</code> file with the following code:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>class ArchiverDbRouter:
def db_for_read(self, model, **hints):
if model._meta.app_label in ['archiver']:
return 'archiver'
return None
def db_for_write(self, model, **hints):
if model._meta.app_label in ['archiver']:
return 'archiver'
return None
def allow_migrate(self, db, app_label, model_name=None, **hints):
if app_label in ['archiver']:
return db == 'archiver'
return None
</code></pre>
<ol start="3">
<li>configured <code>settings.py</code> to use two databases. The idea is to keep the default database for everything Django, and then the "archiver" database for the "archiver" app.</li>
</ol>
<pre><code>import os
from pathlib import Path
from dotenv import load_dotenv
load_dotenv()
USER_DIR = Path(os.getenv('USER_DIR', './user'))
(...)
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
},
'archiver': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': USER_DIR / 'data/app.db',
}
}
DATABASE_ROUTERS = ['archiver.routers.ArchiverDbRouter']
</code></pre>
<ol start="4">
<li>generated my models with the <code>inspectdb</code> command:</li>
</ol>
<pre><code>python manage.py inspectdb --database archiver > tempmodels.py
</code></pre>
<p>Then tweaked those models and saved them to <code>archiver/models.py</code>. I removed the <code>managed = False</code> properties in all models.</p>
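<p>After tweaking, the models look roughly like this (shortened; apart from <code>note</code>, the field and table names are illustrative):</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models

class State(models.Model):
    account = models.TextField()
    name = models.TextField()
    value = models.TextField()

    class Meta:
        db_table = 'states'
        # 'managed = False' removed so Django generates migrations

class Post(models.Model):
    title = models.TextField()
    note = models.TextField(blank=True, null=True)   # the field I later remove

    class Meta:
        db_table = 'posts'
</code></pre>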
<ol start="5">
<li>initialized the database</li>
</ol>
<pre><code>python manage.py migrate
</code></pre>
<p>This creates the "default" database file.</p>
<ol start="6">
<li>generated the migrations for archiver</li>
</ol>
<pre><code>python manage.py makemigrations archiver
</code></pre>
<p>The <code>0001_initial.py</code> migration file is created.</p>
<ol start="7">
<li>applied this migration with the <code>--fake</code> flag</li>
</ol>
<pre><code>python manage.py migrate archiver 0001 --fake
</code></pre>
<p>I can see the corresponding migration saved in the <code>django_migrations</code> table.</p>
<p>At this point, I can use the Django shell and access the actual data in my "archiver" database, which seems to confirm that the routing works correctly and the database is found by Django. E.g.</p>
<pre><code>>>> q = State(account="test", name="test", value="test")
>>> q.save()
</code></pre>
<p>Then I see that the new line (with the three "test" values) is present in the "states" table of my "archiver" database (using a third-party tool, HeidiSQL). I can also see that the modified date for the database file has been updated.</p>
<ol start="8">
<li><p>made some changes to my <code>models.py</code>, by removing a field that was never used in the <code>Post</code> model.</p>
</li>
<li><p>generated the migrations again</p>
</li>
</ol>
<pre><code>python manage.py makemigrations archiver
</code></pre>
<p>The migration file is created:</p>
<pre><code>from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('archiver', '0001_initial'),
]
operations = [
migrations.RemoveField(
model_name='post',
name='note',
),
]
</code></pre>
<ol start="10">
<li>applied the new migration</li>
</ol>
<pre><code>python manage.py migrate archiver
</code></pre>
<p>This gives me the output:</p>
<pre class="lang-bash prettyprint-override"><code>Operations to perform:
Apply all migrations: archiver
Running migrations:
Applying archiver.0002_remove_post_note... OK
</code></pre>
<p>No error. I can see the corresponding migration saved in the <code>django_migrations</code> table.</p>
<p>HOWEVER, when I explore the archiver database (using HeidiSQL again), the "note" field is still present. Also, the "modified date" for the database file has NOT changed.</p>
<p>What am I missing?</p>
|
<python><django><sqlite><database-migration>
|
2025-09-23 15:05:42
| 1
| 1,618
|
s427
|
79,772,771
| 279,711
|
Write tests that check the order in which actions are done
|
<p>I'm encountering a recurring challenge when using TDD to develop code that must perform actions in a specific order. Usually I'm able to write tests such that it is only possible to create a correct solution. However, when I need to test code that is required to perform actions in a specified sequence, this becomes more complex than I would want.</p>
<p>Let's take the example of a program that has to send AT commands and process the responses. Somewhere in the code you'll eventually find a function that takes a command as an argument, and returns the received response.</p>
<p>Here is a test case that I would write for such a function:</p>
<pre class="lang-py prettyprint-override"><code>def test_exchange_sends_a_command_and_returns_the_received_response():
stdout = io.StringIO()
stdin = io.StringIO("CONNECT\r\n")
response = exchange(stdout, stdin, "ATD")
assert stdout.getvalue() == "ATD\r\n"
assert response == "CONNECT"
</code></pre>
<p>A correct implementation would be:</p>
<pre class="lang-py prettyprint-override"><code>def exchange(stdout, stdin, command):
stdout.write(command + "\r\n")
return stdin.readline().strip()
</code></pre>
<p>However, the following incorrect implementation would also pass the test:</p>
<pre class="lang-py prettyprint-override"><code>def exchange(stdout, stdin, command):
response = stdin.readline().strip()
stdout.write(command + "\r\n")
return response
</code></pre>
<p>One potential solution is to implement a stateful test double. For example:</p>
<pre class="lang-py prettyprint-override"><code>def test_exchange_command_sends_command_and_receives_response():
stdin = io.StringIO()
class StdoutStub(io.StringIO):
def write(self, buf):
super().write(buf)
if self.getvalue() == "ATD\r\n":
stdin.write("CONNECT\r\n")
stdin.seek(0)
stdout = StdoutStub()
response = modemcontrol.exchange(stdout, stdin, "ATD")
assert stdout.getvalue() == "ATD\r\n"
assert response == "CONNECT"
</code></pre>
<p>There are probably better ways to implement this test double, but in essence the same logic would be necessary.</p>
<p>Another solution which turned out to work is using Rust async code. A full example is way too complex, but it would boil down to something like this:</p>
<pre class="lang-rs prettyprint-override"><code>let exchange_task = tokio::spawn(
async move { exchange(stdout_stub, stdin_stub, "ATD").await }
);
// Check if the command was sent.
assert_eq!(stdout_stub.peek(), "ATD\r\n");
// Make the response available to the function
stdin_stub.put("CONNECT\r\n")
// Check response
let response = exchange_task.await.unwrap();
assert_eq!(response, "CONNECT");
</code></pre>
<p>I certainly wouldn't want to start using async in a project only to make the tests more simple. And I'm not sure if I would consider the code above 'simple' anyway.</p>
<p>Yet another option would probably be to use an event-driven design. But again, that would result in much more code.</p>
<p>Does anybody have a common pattern for testing these kinds of functions?</p>
|
<python><tdd>
|
2025-09-23 15:02:43
| 2
| 1,701
|
Bart
|
79,772,749
| 5,241,389
|
How to set background seaborn style in a multi-plot figure with subplot()
|
<p>I have a multi-plot bar plot figure produced with matplotlib/seaborn and I'd like to control the tick lines and background style. When I try to use sns.set_style("whitegrid"), the background is grey, not whitegrid.</p>
<p>Example data, similar structure to mine:</p>
<pre><code>data = {
"Country": ["UK", "UK", "UK", "UK", "UK", "France", "France", "France", "France", "France"],
"Variable": ["A", "B", "C", "D", "E", "A", "B", "C", "D", "E"],
"Counts": [5, 10, 7, 9, 12, 25, 29, 21, 23, 31]
}
df = pd.DataFrame(data)
df
</code></pre>
<p>And here is the plotting code:</p>
<pre><code>fig, ax = plt.subplots(2,1, figsize=(7,7))
### Set seaborn theme
sns.set_style("whitegrid")
sns.set(font_scale=1.2)
fontsize = 15
fontsize_labels1 = 12
fontsize_labels2 = 11
### Colour palette for pfsa variants
col_uk = '#dd5129'
col_france = '#0f7ba2'
### Define limits
ylim = 40
country_text_y = 35
### plot 1 - UK
plot_counts_uk = df.loc[df['Country'] == "UK"]
p_uk = sns.barplot(data=plot_counts_uk, x="Variable", y="Counts",
color=col_uk, ax=ax[0])
p_uk.set_ylim(0, ylim)
ax[0].set(ylabel='Count', xlabel=None)
x_axis1 = p_uk.axes.get_xaxis()
x_axis1.set_visible(False)
p_uk.text(0, country_text_y, "UK", horizontalalignment='center', fontsize=fontsize_labels1, color='black')
### plot 2 - France
plot_counts_france = df.loc[df['Country'] == "France"]
variables_list = plot_counts_france.Variable.values.tolist()
p_france = sns.barplot(data=plot_counts_france, x="Variable", y="Counts",
color=col_france, ax=ax[1])
ax[1].set_xticklabels(variables_list, rotation=40, ha='right')
p_france.set_ylim(0, ylim)
ax[1].set(ylabel='Count', xlabel=None)
x_axis2 = p_france.axes.get_xaxis()
x_axis2.set_visible(True)
p_france.text(0, country_text_y, "France", horizontalalignment='center', fontsize=fontsize_labels1, color='black')
fig.tight_layout()
</code></pre>
<p>Despite sns.set_style("whitegrid"), the plots all have a grey background. I have tried adding facecolor='#FFFFFF' but there is no change:</p>
<pre><code>fig, ax = plt.subplots(2,1, figsize=(7,7), facecolor='#FFFFFF')
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/Fys0zzwV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fys0zzwV.png" alt="Code output - grey background" /></a></p>
<p>Changing the sns.set_style() argument to other things like sns.set_style("dark") also doesn't change anything.</p>
<p>Ideally I'd also like to control the tick markings on the plots, though that depends what it looks like if it actually had a whitegrid style. What I have in mind is vertical minor lines in the centre of each bar to keep track of the x-axis labels (A, B, C, D, E), and to set the y-axis ticks to something sensible that would apply to all the plots. A lot of that may be accomplished by the whitegrid style but I can't see what that looks like.</p>
<p>Thanks for the help!</p>
|
<python><pandas><matplotlib><seaborn>
|
2025-09-23 14:46:05
| 1
| 407
|
Will Hamilton
|
79,772,712
| 2,578,235
|
Problem with running rust compiler in subprocess with memory limits
|
<p>I want to execute Rust code in Python subprocesses. But when I try to limit the process memory, I keep getting errors from the Rust compiler. What might be the issue?</p>
<pre class="lang-py prettyprint-override"><code>
import os
import resource
import pytest
import subprocess
import platform
import fcntl
TIMEOUT = 15
PLANG = "rust"
IO_BUF_BLOCK_SZ = 4096
DEFAULT_IO_BUF_SZ = 256 * 1024 * 1024
MAX_PRC_VIRT_MEM = 100 * 1024 * 1024
MAX_PRC_STACK_MEM = 100 * 1024 * 1024
TIK = 0.1
def limit_virtual_memory():
# The tuple below is of the form (soft limit, hard limit).
# When the limit cannot be changed, setrlimit() raises ValueError.
# soft, hard = resource.getrlimit(resource.RLIMIT_AS)
# resource.setrlimit(resource.RLIMIT_AS, (MAX_PRC_VIRT_MEM, resource.RLIM_INFINITY))
# resource.setrlimit(resource.RLIMIT_AS, (MAX_PRC_VIRT_MEM, MAX_PRC_VIRT_MEM))
resource.setrlimit(resource.RLIMIT_DATA, (MAX_PRC_VIRT_MEM, MAX_PRC_VIRT_MEM))
if not platform.uname().system == "Darwin":
resource.setrlimit(
resource.RLIMIT_STACK, (MAX_PRC_STACK_MEM, MAX_PRC_STACK_MEM)
)
def set_nonblocking(reader):
fd = reader.fileno()
fl = fcntl.fcntl(fd, fcntl.F_GETFL)
fcntl.fcntl(fd, fcntl.F_SETFL, fl | os.O_NONBLOCK)
class TestRustEvaluation:
"""Test class for Rust evaluation functions"""
def test_simple_stdin_program(self):
"""Test successful execution"""
program = """
use std::io;
fn read_and_display_variables() {
let mut input1 = String::new();
io::stdin()
.read_line(&mut input1)
.expect("Error");
let first_variable = input1.trim();
let mut input2 = String::new();
io::stdin()
.read_line(&mut input2)
.expect("Error");
let second_variable = input2.trim();
println!("{}", first_variable);
println!("{}", second_variable);
}
fn main() {
read_and_display_variables();
}
"""
cwd = "/home/jovyan/ipetrov/bench/LiveCodeBench-mult/tests/assets"
code_path = os.path.join(cwd, "main.rs")
exec_name = os.path.join(cwd, "main")
args = ["rustc", str(code_path),"-o", exec_name]
new_env = {}
new_env['PATH'] = os.environ['PATH']
new_env['GCC'] = os.environ['GCC']
new_env['CXX'] = os.environ['CXX']
new_env['CXXFLAGS'] = os.environ['CXXFLAGS']
new_env['LD_LIBRARY_PATH'] = os.environ['LD_LIBRARY_PATH']
new_env['LD'] = os.environ['LD']
new_env['RUST_BACKTRACE'] = "1"
new_env['RUST_LIB_BACKTRACE'] = "1"
print(platform.uname())
p = subprocess.Popen(
args,
env=new_env,
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
preexec_fn=limit_virtual_memory
# start_new_session=True,
# # increase bufsize to fit estimated output size
# bufsize=4 * bufsize,
# cwd=cwd,
# preexec_fn=limit_virtual_memory if limit_memory else None,
)
set_nonblocking(p.stdout)
set_nonblocking(p.stderr)
input_data = []
stdout, stderr = p.communicate(input=input_data, timeout=TIMEOUT)
try:
stdout = stdout.decode("utf-8")
stderr = stderr.decode("utf-8")
except:
pass
print(stdout)
print(stderr)
assert not stderr
if __name__ == "__main__":
ss = TestRustEvaluation()
ss.test_simple_stdin_program()
</code></pre>
<p>This is the error that I'm getting:</p>
<pre><code>error: linking with `cc` failed: exit status: 1
|
= note: "cc" "-m64" "/tmp/rustcwSISL9/symbols.o" "<2 object files omitted>" "-Wl,--as-needed" "-Wl,-Bstatic" "<sysroot>/lib/rustlib/x86_64-unknown-linux-gnu/lib/{libstd-*,libpanic_unwind-*,libobject-*,libmemchr-*,libaddr2line-*,libgimli-*,librustc_demangle-*,libstd_detect-*,libhashbrown-*,librustc_std_workspace_alloc-*,libminiz_oxide-*,libadler2-*,libunwind-*,libcfg_if-*,liblibc-*,librustc_std_workspace_core-*,liballoc-*,libcore-*,libcompiler_builtins-*}.rlib" "-Wl,-Bdynamic" "-lgcc_s" "-lutil" "-lrt" "-lpthread" "-lm" "-ldl" "-lc" "-L" "/tmp/rustcwSISL9/raw-dylibs" "-B<sysroot>/lib/rustlib/x86_64-unknown-linux-gnu/bin/gcc-ld" "-fuse-ld=lld" "-Wl,--eh-frame-hdr" "-Wl,-z,noexecstack" "-L" "<sysroot>/lib/rustlib/x86_64-unknown-linux-gnu/lib" "-o" "/home/user/scripts/tests/assets/main" "-Wl,--gc-sections" "-pie" "-Wl,-z,relro,-z,now" "-nodefaultlibs"
= note: some arguments are omitted. use `--verbose` to show all linker arguments
= note: terminate called after throwing an instance of 'std::system_error'
what(): Resource temporarily unavailable
PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace.
Stack dump:
0. Program arguments: /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/bin/rust-lld -flavor gnu -plugin /home/conda/envs/python311/bin/../libexec/gcc/x86_64-conda-linux-gnu/13.3.0/liblto_plugin.so -plugin-opt=/home/conda/envs/python311/bin/../libexec/gcc/x86_64-conda-linux-gnu/13.3.0/lto-wrapper -plugin-opt=-fresolution=/tmp/cczHPv10.res --sysroot=/home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -pie -o /home/user/scripts/tests/assets/main /home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot/usr/lib/../lib/Scrt1.o /home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot/usr/lib/../lib/crti.o /home/conda/envs/python311/bin/../lib/gcc/x86_64-conda-linux-gnu/13.3.0/crtbeginS.o -L/tmp/rustcwSISL9/raw-dylibs -L/home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib -L/home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/bin/gcc-ld -L/home/conda/envs/python311/bin/../lib/gcc/x86_64-conda-linux-gnu/13.3.0 -L/home/conda/envs/python311/bin/../lib/gcc -L/home/conda/envs/python311/bin/../lib/gcc/x86_64-conda-linux-gnu/13.3.0/../../../../x86_64-conda-linux-gnu/lib/../lib -L/home/conda/envs/python311/bin/../lib/gcc/x86_64-conda-linux-gnu/13.3.0/../../../../lib -L/home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot/lib/../lib -L/home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot/usr/lib/../lib -L/home/conda/envs/python311/bin/../lib/gcc/x86_64-conda-linux-gnu/13.3.0/../../../../x86_64-conda-linux-gnu/lib -L/home/conda/envs/python311/bin/../lib/gcc/x86_64-conda-linux-gnu/13.3.0/../../.. -L/home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot/lib -L/home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot/usr/lib /tmp/rustcwSISL9/symbols.o /home/user/scripts/tests/assets/main.main.dc1f31da4ed3d656-cgu.0.rcgu.o /home/user/scripts/tests/assets/main.3cbxwqh24x4onufg206w31soo.rcgu.o --as-needed -Bstatic /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd-210854cf1daa4bec.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libpanic_unwind-60999f71969c62e4.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libobject-c8ba02a03a2c8826.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libmemchr-8fad20e6ec27261f.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libaddr2line-4762c639c5141640.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libgimli-68de3fb7a86859f3.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_demangle-4f7fbb1f095fcd3f.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libstd_detect-6a40cf5258f51f80.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libhashbrown-eac3a09b6db64fd7.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_std_workspace_alloc-9764c127deb56b65.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libminiz_oxide-792c79271e16ea9f.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libadler2-ae8f2d91738556a4.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libunwind-8777a826d833fed1.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcfg_if-6b0ee99d0cd367aa.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/liblibc-25b9095ba4a91f19.rlib 
/home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/librustc_std_workspace_core-9eb003a32f0992fa.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/liballoc-11a30f34fef80118.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcore-049beac0cc29bd50.rlib /home/conda/envs/python311/lib/rustlib/x86_64-unknown-linux-gnu/lib/libcompiler_builtins-8e6617d0a0e102d3.rlib -Bdynamic -lgcc_s -lutil -lrt -lpthread -lm -ldl -lc --eh-frame-hdr -z noexecstack --gc-sections -z relro -z now /home/conda/envs/python311/bin/../lib/gcc/x86_64-conda-linux-gnu/13.3.0/crtendS.o /home/conda/envs/python311/bin/../x86_64-conda-linux-gnu/sysroot/usr/lib/../lib/crtn.o -rpath /home/conda/envs/python311/lib
Stack dump without symbol names (ensure you have llvm-symbolizer in your PATH or set the environment var `LLVM_SYMBOLIZER_PATH` to point to it):
0 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e951367 llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) + 39
1 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e951783
2 libc.so.6 0x00007f9d79019520
3 libc.so.6 0x00007f9d7906d9fc pthread_kill + 300
4 libc.so.6 0x00007f9d79019476 raise + 22
5 libc.so.6 0x00007f9d78fff7f3 abort + 211
6 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e7fd6d5 __gnu_cxx::__verbose_terminate_handler() + 245
7 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e7fce36 __cxxabiv1::__terminate(void (*)()) + 6
8 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e7fcea1
9 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e7fcff5
10 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e5af3d5 std::__throw_system_error(int) + 129
11 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e87b869
12 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e90012f
13 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e8ffc7c
14 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e8ffde5 llvm::parallel::TaskGroup::spawn(std::function<void ()>) + 53
15 libLLVM.so.20.1-rust-1.90.0-stable 0x00007f9d7e8fff88 llvm::parallelFor(unsigned long, unsigned long, llvm::function_ref<void (unsigned long)>) + 200
16 rust-lld 0x000055e5da0be476
17 rust-lld 0x000055e5da0a15fa
18 rust-lld 0x000055e5da09ca14
19 rust-lld 0x000055e5d9fac3b6
20 rust-lld 0x000055e5d9fab400
21 rust-lld 0x000055e5d9fabab1
22 libc.so.6 0x00007f9d79000d90
23 libc.so.6 0x00007f9d79000e40 __libc_start_main + 128
24 rust-lld 0x000055e5d9e2e069
collect2: fatal error: ld terminated with signal 6 [Aborted], core dumped
compilation terminated.
error: aborting due to 1 previous error
Traceback (most recent call last):
File "/home/user/scripts/tests/tests_plangs/test_rust2.py", line 127, in <module>
ss.test_simple_stdin_program()
File "/home/user/scripts/tests/tests_plangs/test_rust2.py", line 121, in test_simple_stdin_program
assert not stderr
^^^^^^^^^^
AssertionError
</code></pre>
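<p>For what it's worth, the crash happens inside <code>rust-lld</code> while it is spawning linker threads ("Resource temporarily unavailable"), which suggests the rlimits set in <code>preexec_fn</code> are constraining the compiler itself rather than the program under test. A possible restructuring, sketched under the assumption that only the compiled binary needs to be sandboxed, is to compile without limits and apply them only when running the result:</p>
<pre class="lang-py prettyprint-override"><code># compile without resource limits (rustc and its linker need plenty of memory and threads)
subprocess.run(args, env=new_env, check=True, timeout=TIMEOUT)

# apply the rlimits only to the produced executable
p = subprocess.Popen(
    [exec_name],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    preexec_fn=limit_virtual_memory,
)
stdout, stderr = p.communicate(input=b"", timeout=TIMEOUT)
</code></pre>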
|
<python><rust>
|
2025-09-23 14:16:42
| 0
| 2,963
|
Johnny Cheesecutter
|
79,772,623
| 4,247,599
|
How to get the current UTC date in Python as a type of datetime.datetime
|
<p>I would like to use the Python standard library <code>datetime</code> to assign to a variable the current date, in UTC and having it of type <code>datetime.datetime</code>.</p>
<p>i.e. <code>datetime.datetime(2025, 9, 23, 0, 0, tzinfo=datetime.timezone.utc)</code></p>
<p>How can I do that?</p>
<p>Note that the built-in <code>datetime.date.today()</code> does not allow for a timezone, and <code>datetime.datetime.today()</code> is not an object of type <code>datetime.date</code> but of <code>datetime.datetime</code>, also including hours, minutes, and seconds.</p>
<pre class="lang-py prettyprint-override"><code>today = datetime.datetime.combine(
    datetime.datetime.now(datetime.UTC).date(),
    datetime.datetime.min.time(),
    tzinfo=datetime.UTC,
)
</code></pre>
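<p>For comparison, an equivalent construction that skips <code>combine()</code> would be to truncate the current UTC time directly (this is just an alternative spelling of the same idea):</p>
<pre class="lang-py prettyprint-override"><code>today = datetime.datetime.now(datetime.UTC).replace(
    hour=0, minute=0, second=0, microsecond=0
)
</code></pre>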
|
<python><datetime>
|
2025-09-23 12:55:18
| 2
| 4,299
|
SeF
|
79,772,560
| 6,345,518
|
Extend existing Pydantic type
|
<p>I'd like to extend existing Pydantic types such as <code>FilePath</code> by, for instance, adding a file type <code>pattern</code> for</p>
<ul>
<li>validation</li>
<li>serialization to JSON schema.</li>
</ul>
<h2>Current approach</h2>
<p>For instance, I'd like to define my custom type <code>FilePathPattern</code>, extending <code>FilePath</code> by a <code>pattern</code> which is <strong>defined when using the custom type</strong>:</p>
<pre><code>from typing import Annotated

from pydantic import BaseModel, Field, FilePath


def get_pattern(
    field_data: dict[str, str]
) -> dict[str, str]:
    return {"pattern": field_data["pattern"]}


FilePathPattern = Annotated[
    FilePath,
    Field(json_schema_extra=get_pattern),
]


class MySchema(BaseModel):
    """My schema."""

    path_with_suffix: FilePathPattern = Field(
        title="Path to CSV or TXT",
        description="something",
        default="my_data.csv",
        pattern=r".*\.(csv|txt)$",
    )
    another_path: FilePathPattern = Field(
        pattern="some_pattern",
    )
</code></pre>
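<p>For reference, the callable form of <code>json_schema_extra</code> in Pydantic v2 receives the JSON schema dict being generated and is expected to mutate it in place; a minimal sketch of that callable shape (the pattern value below is a hard-coded placeholder, which is exactly the reusability problem described further down) looks like:</p>
<pre><code>def add_pattern(schema: dict) -> None:
    # mutate the generated JSON schema in place
    schema["pattern"] = r".*\.(csv|txt)$"  # hard-coded placeholder, not reusable

FilePathPattern = Annotated[
    FilePath,
    Field(json_schema_extra=add_pattern),
]
</code></pre>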
<h2>Expected outcome</h2>
<p>Which should</p>
<ul>
<li><p>Use the validators provided by the base type <code>FilePath</code> to check for existence</p>
</li>
<li><p>extend the validation by matching the regex pattern with <code>str(path_with_suffix)</code></p>
</li>
<li><p>serialize the model to a JSON schema with <code>MySchema.model_json_schema()</code> containing the additional key <code>pattern</code> (which is also included in the JSON schema standard):</p>
<pre><code>{
  ...
  "properties": {
    "path_with_suffix": {
      "default": "my_data.csv",
      "description": "something",
      "format": "file-path",
      "pattern": ".*/struct.dat",
      "title": "Path to CSV or TXT"
    },
    "another_path": {...}
  },
  "title": "...",
  "type": "object"
}
</code></pre>
</li>
</ul>
<h2>Problem</h2>
<p>Unfortunately this doesn't work, since</p>
<ul>
<li><code>FilePath</code> doesn't seem to contain a <code>pattern</code>, even if defined in the <code>Field</code> of the derived custom type <code>FilePathPattern</code></li>
<li>Defining the pattern explicitly in <code>json_schema_extra={"pattern": r".*/struct\.dat"}</code> will work, but this drastically reduces reusability of the custom type</li>
<li>Using <code>WithJsonSchema</code> in the <code>Annotated</code> will result in dropping all of the other schema keywords such as <code>format</code></li>
</ul>
<p>How can I define a custom type extending existing types without overwriting any of the features provided by the existing type?</p>
|
<python><pydantic>
|
2025-09-23 11:54:22
| 1
| 5,832
|
JE_Muc
|
79,772,503
| 1,663,232
|
Azure monitor opentelemetry does not close span on chain end in Langchain
|
<p>I'm trying to setup Langchain tracing in Azure Monitor (via Application Insights) and I use the following test code</p>
<pre><code>if __name__ == "__main__":
    configure_azure_monitor(connection_string=application_insights_connection_string)
    OpenAIInstrumentor().instrument()
    tracer = AzureAIInferenceTracer(
        connection_string=application_insights_connection_string,
        enable_content_recording=True,
    )
    prompt = ChatPromptTemplate.from_messages(
        messages=[
            ('system', "Say hello in {input}")
        ])
    chain = prompt | get_llm()
    chain_with_callbacks = chain.with_config(callbacks=[tracer])
    result = chain_with_callbacks.invoke(
        {"input": "Spanish"})
    print(result.content)
    result = chain_with_callbacks.invoke(
        {"input": "English"})
    print(result.content)
    result = chain_with_callbacks.invoke(
        {"input": "French"})
    print(result.content)
</code></pre>
<p>Azure Monitor is configured correctly and I can see my traces.
I expect to see 3 traces on the same level, but I see a tree with 3 levels (the second invoke is a child of the first and the third invoke is a child of the second).</p>
<p><a href="https://i.sstatic.net/Dwscvv4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dwscvv4E.png" alt="Screenshot from Azure" /></a></p>
<p>It seems that it does not close span on a chain end</p>
<p>Just for comparison:
The same code in LangSmith generates correct traces:
<a href="https://i.sstatic.net/266BJNkM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/266BJNkM.png" alt="enter image description here" /></a></p>
<p>Do you have any idea what could be a problem here?</p>
<p>P.S. All package versions are up to date.</p>
|
<python><azure><langchain><open-telemetry>
|
2025-09-23 10:56:45
| 0
| 1,371
|
Sergey
|
79,772,437
| 2,272,386
|
xsdata dataclass generation from XSD with choice fields
|
<p>When generating the model (dataclass) from an XSD with choice elements using <strong>xsdata</strong>, how is it possible to perform validation on those choices so that only one of them is allowed to be set?</p>
<p>XSD fragment</p>
<pre><code><xs:complexType name="ThirdPartyType">
<xs:choice>
<xs:element name="LegalEntity" type="LegalEntityType"/>
<xs:element name="Individual" type="IndividualType"/>
</xs:choice>
</xs:complexType>
</code></pre>
<p>I have searched the web and found that this requirement could be validated using the <code>__post_init__</code> method; but would it be possible to add some hook to xsdata so that it automatically includes the <code>__post_init__</code> implementation when generating the model?</p>
<pre><code>@dataclass
class ThirdPartyType:
    legal_entity: Optional[LegalEntityType] = field(
        default=None,
        metadata={
            "name": "LegalEntity",
            "type": "Element",
            "namespace": "",
        },
    )
    individual: Optional[IndividualType] = field(
        default=None,
        metadata={
            "name": "Individual",
            "type": "Element",
            "namespace": "",
        },
    )

    def __post_init__(self):
        filled = sum(v is not None for v in [self.legal_entity, self.individual])
        if filled > 1:
            raise ValueError("Only one of ['legal_entity', 'individual'] can be defined")
</code></pre>
|
<python><xsd><python-dataclasses>
|
2025-09-23 09:43:53
| 0
| 727
|
Luis Daniel
|
79,772,386
| 12,158,757
|
How to clear output from all cells?
|
<p>With IPython, is there any way to clear the output from all cells? I am looking for a command that can do it in one shot.</p>
<p>I see that <code>clear_output</code>, like below, can do the cleaning work, but it works for one single cell, not all of them:</p>
<pre class="lang-python prettyprint-override"><code>from IPython.display import clear_output
clear_output(wait=True)
</code></pre>
|
<python><jupyter-notebook><ipython>
|
2025-09-23 08:43:47
| 1
| 105,741
|
ThomasIsCoding
|
79,772,181
| 1,719,931
|
Show progress bar when reading files with globbing with polars
|
<p>I have a folder with multiple Excel files.</p>
<p>I'm reading all of them in a single polars DataFrame concatenated vertically using globbing:</p>
<pre><code>import polars as pl
df = pl.read_excel("folder/*.xlsx")
</code></pre>
<p>How can I have a progress bar that tracks files that are read?</p>
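<p>One workaround sketch (giving up the single globbed call, and assuming <code>tqdm</code> is acceptable as an extra dependency) would be to expand the glob myself and read the files one by one:</p>
<pre><code>from pathlib import Path

import polars as pl
from tqdm import tqdm

files = sorted(Path("folder").glob("*.xlsx"))
df = pl.concat([pl.read_excel(f) for f in tqdm(files, desc="Reading Excel files")])
</code></pre>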
|
<python><progress-bar><python-polars><polars>
|
2025-09-23 03:18:31
| 1
| 5,202
|
robertspierre
|
79,772,091
| 1,360,544
|
OSX does not display the names of free-threaded python threads
|
<p>I am running free-threaded python3.13t with PYTHON_GIL=0 on macOS Sequoia 15.7.</p>
<p>I create threads like this:</p>
<pre><code>thread = threading.Thread(target=lambda: time.sleep(10), name="test-thread")
thread.start()
</code></pre>
<p>But their names are not displayed by the OS (note that the second line, corresponding to the thread, is empty):</p>
<pre><code>$ ps -M 87266
USER PID TT %CPU STAT PRI STIME UTIME COMMAND
burkov 87266 s032 0.0 S 31T 0:00.24 0:00.87 /Library/Frameworks/PythonT.framework/Versions/3.
87266 0.0 S 31T 0:00.00 0:00.00
</code></pre>
<p>I tried to set the name manually with the pthreads <code>pthread_setname_np</code> call:</p>
<pre><code>import ctypes
import ctypes.util
import threading
import time

def set_thread_name(thread: threading.Thread, name: str):
    """Set the OS thread name using pthread_setname_np
    Should be called after thread.start()!
    """
    try:
        libpthread_path = ctypes.util.find_library("pthread")
        if libpthread_path:
            libpthread = ctypes.CDLL(libpthread_path)
            if hasattr(libpthread, "pthread_setname_np"):
                pthread_setname_np = libpthread.pthread_setname_np
                pthread_setname_np.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
                pthread_setname_np.restype = ctypes.c_int
                # Get current thread ID
                thread_id = thread.ident
                pthread_setname_np(thread_id, name.encode('ascii', 'replace')[:15])
    except Exception:
        pass  # Ignore failures

thread = threading.Thread(target=lambda: time.sleep(10), name="test-thread")
thread.start()
set_thread_name(thread, "test-thread")
</code></pre>
<p>Again, to no avail. Can you suggest how to fix this?</p>
|
<python><multithreading><python-multithreading><gil><python-3.13>
|
2025-09-22 21:54:34
| 0
| 14,716
|
Boris Burkov
|
79,771,953
| 10,902,944
|
How to properly use joblib files in Dask?
|
<pre><code>from joblib import load
ntrees_16_model = load(r"ntrees_quantile_16_model_watermask.joblib")
ntrees_50_model = load(r"ntrees_quantile_50_model_watermask.joblib")
ntrees_84_model = load(r"ntrees_quantile_84_model_watermask.joblib")
</code></pre>
<p>I am using xarray's <code>xr.map_blocks()</code> to run a function on my data frame, using these joblib files above. When I look at the Dask dashboard, I am seeing that my function is only being processed on one worker, despite having multiple workers that can do processing. I can supply my function, although no one would be able to replicate my issue with this code without supplying all of the input data needed.</p>
<pre><code>import numpy as np
import pandas as pd
def generate_treelist(pixel_df, ntrees_50_model, ntrees_16_model, ntrees_84_model, pixel_resolution):
try:
pixel_df = pixel_df.rename(columns={'X': 'pixel_x'})
pixel_df = pixel_df.rename(columns={'Y': 'pixel_y'})
# Drop rows containing NaN values
pixel_df = pixel_df.dropna()
if pixel_df.isnull().values.any():
print(f"[ERROR] NaNs detected in pixel_df after dropping NaN rows.")
return None
# Define the sampleDraws function
pixel_df = pixel_df.reset_index(drop=True)
def sampleDraws(df, num_pulls):
try:
pull_cols = ['pull_' + str(i+1) for i in range(num_pulls)]
random_samples = np.random.normal(
df['mu'].values[:, np.newaxis], df['sd'].values[:, np.newaxis], (len(df), num_pulls))
samples_df = pd.DataFrame(
random_samples, columns=pull_cols, index=df.index)
out_df = pd.concat([df, samples_df], axis=1)
return out_df
except Exception as e:
print(f"[ERROR] Exception occurred in sampleDraws: {e}")
return None
# Main treelist generation code starts here
pixel_coords = pixel_df[["pixel_x", "pixel_y"]]
pixel_df = pixel_df.drop(
columns=['pixel_x', 'pixel_y', 'time'])
desired_columns = [
'tmmx_1_mean', 'tmmn_1_mean', 'pr_1_mean', 'vs_1_mean', 'rmin_1_mean',
'rmax_1_mean', 'etr_1_mean', 'tmmx_5_mean', 'tmmn_5_mean', 'pr_5_mean',
'vs_5_mean', 'rmin_5_mean', 'rmax_5_mean', 'etr_5_mean', 'tmmx_10_mean',
'tmmn_10_mean', 'pr_10_mean', 'vs_10_mean', 'rmin_10_mean', 'rmax_10_mean',
'etr_10_mean', 'tmmx_20_mean', 'tmmn_20_mean', 'pr_20_mean', 'vs_20_mean',
'rmin_20_mean', 'rmax_20_mean', 'etr_20_mean', 'tmmx_30_mean', 'tmmn_30_mean',
'pr_30_mean', 'vs_30_mean', 'rmin_30_mean', 'rmax_30_mean', 'etr_30_mean', 'water_mask',
'ls_blue', 'ls_green', 'ls_red', 'ls_nir', 'ls_swir1', 'ls_swir2',
'mean_num_of_fires', 'mode_years_since_fire'
]
pixel_df = pixel_df[desired_columns]
ntrees_50_pred = ntrees_50_model.predict(pixel_df)
ntrees_16_pred = ntrees_16_model.predict(pixel_df)
ntrees_84_pred = ntrees_84_model.predict(pixel_df)
if np.isnan(ntrees_50_pred).any() or np.isnan(ntrees_16_pred).any() or np.isnan(ntrees_84_pred).any():
print(f"[ERROR] NaNs detected in number of trees prediction.")
return None
# Store number of trees' mean and standard deviation for uncertainty
ntrees_mu = ntrees_50_pred
ntrees_sd = abs(ntrees_84_pred - ntrees_16_pred) / 2
df = pd.DataFrame({'mu': ntrees_mu, 'sd': ntrees_sd})
print(f"[INFO] Sampling number of trees...")
random_ntrees = sampleDraws(df, num_pulls=1)
if random_ntrees is None:
return None
random_ntrees_clean = random_ntrees.drop(['mu', 'sd'], axis=1)
unscaled_round_ntrees = np.round(random_ntrees_clean).astype(int)
random_ntrees_clean[random_ntrees_clean < 0] = 0
current_ntrees = unscaled_round_ntrees.iloc[:, 0]
current_ntrees[current_ntrees < 0] = 0
pixel_df = pd.concat([pixel_df, current_ntrees], axis=1)
random_coords = np.random.uniform(-pixel_resolution / 2,
pixel_resolution / 2, (len(random_ntrees_clean), 2))
repeated_pixel_coords = pixel_coords.values.repeat(
current_ntrees, axis=0)
tree_coords = repeated_pixel_coords + random_coords
# Repeat pixel-level variables to match the length of treelist
ntrees_mu_repeated = np.repeat(ntrees_mu, current_ntrees)
ntrees_sd_repeated = np.repeat(ntrees_sd, current_ntrees)
# Construct final treelist DataFrame with uncertainty columns
treelist = pd.DataFrame({
'pixel_x': repeated_pixel_coords[:, 0],
'pixel_y': repeated_pixel_coords[:, 1],
'tree_x': tree_coords[:, 0],
'tree_y': tree_coords[:, 1],
'n_trees_mu': ntrees_mu_repeated,
'n_trees_sd': ntrees_sd_repeated
})
print(treelist)
return treelist
except Exception as e:
print(f"[ERROR] Exception occurred during treelist generation: {e}")
return None
</code></pre>
<p>The only thing I can think of is that there is a "proper" way to load the joblib files onto the Dask cluster, as they are currently loaded only on my client. Am I on the right track with my thinking?</p>
<p>IMPORTANT EDIT: I realize now that I didn't provide context on the data frame part. I have a wrapper function that converts the xarray into a data frame, runs <code>generate_treelist()</code> within this wrapper, and then converts the resulting data frame back into an xarray.</p>
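<p>One pattern sometimes used for this kind of setup (sketched below with my file names; it is not specific to these models) is to avoid loading the models on the client at all and instead load them lazily inside the mapped function, so each worker reads the .joblib files itself the first time it processes a block:</p>
<pre><code>from joblib import load

_MODELS = {}  # per-process cache, filled on first use inside a worker

def get_models():
    if not _MODELS:
        _MODELS["p16"] = load("ntrees_quantile_16_model_watermask.joblib")
        _MODELS["p50"] = load("ntrees_quantile_50_model_watermask.joblib")
        _MODELS["p84"] = load("ntrees_quantile_84_model_watermask.joblib")
    return _MODELS["p50"], _MODELS["p16"], _MODELS["p84"]
</code></pre>
<p>The wrapper passed to <code>xr.map_blocks()</code> would then call <code>get_models()</code> instead of receiving the models as arguments; this assumes the .joblib files are readable from every worker.</p>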
|
<python><dask><python-xarray><joblib>
|
2025-09-22 18:40:10
| 0
| 397
|
Adriano Matos
|
79,771,920
| 1,747,834
|
Trouble building C++ extension for Python-3.12
|
<p>The <code>setup.py</code> works fine with Python-3.6 on RHEL7:
<code>python3 setup.py build && python3 setup.py install --user</code></p>
<p>Trying to do the same on RHEL8 with Python 3.12 (and g++ 8.5.0), however, I get an error from <code>g++</code>:</p>
<pre class="lang-none prettyprint-override"><code>In file included from /usr/include/c++/8/x86_64-redhat-linux/bits/os_defines.h:39,
from /usr/include/c++/8/x86_64-redhat-linux/bits/c++config.h:2470,
from /usr/include/c++/8/utility:68,
from /usr/include/c++/8/algorithm:60,
from /home/me/project/mySource.cpp:1:
/usr/include/features.h:381:4: warning: #warning _FORTIFY_SOURCE requires compiling with optimization (-O) [-Wcpp]
# warning _FORTIFY_SOURCE requires compiling with optimization (-O)
^~~~~~~
In file included from /usr/include/c++/8/bits/stl_algo.h:59,
from /usr/include/c++/8/algorithm:62,
from /home/me/project/mySource.cpp:1:
/usr/include/c++/8/cstdlib:75:15: fatal error: stdlib.h: No such file or directory
#include_next <stdlib.h>
^~~~~~~~~~
compilation terminated.
error: command '/usr/bin/g++' failed with exit code 1
</code></pre>
<p>Now, the <code>/usr/include/stdlib.h</code> and <code>/usr/include/c++/8/stdlib.h</code> are both present, installed by glibc-headers and libstdc++-devel respectively. Moreover, the same C++ source file compiles fine if I invoke the compiler by hand -- with the flags which <em>I think</em> should be there on the command line.</p>
<p>Using <code>strace</code> I was able to discern that the compiler checks the following locations before reporting the error:</p>
<pre class="lang-none prettyprint-override"><code>2396267 lstat("/usr/include/c++/8/x86_64-redhat-linux/stdlib.h", 0x7fff0704f4c0) = -1 ENOENT (No such file or directory)
2396267 openat(AT_FDCWD, "/usr/lib/gcc/x86_64-redhat-linux/8/../../../../include/c++/8/x86_64-redhat-linux/stdlib.h", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)
2396267 lstat("/usr/include/c++/8/backward/stdlib.h", 0x7fff0704f4c0) = -1 ENOENT (No such file or directory)
2396267 openat(AT_FDCWD, "/usr/lib/gcc/x86_64-redhat-linux/8/../../../../include/c++/8/backward/stdlib.h", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)
2396267 lstat("/usr/lib/gcc/x86_64-redhat-linux/8/include/stdlib.h", 0x7fff0704f4c0) = -1 ENOENT (No such file or directory)
2396267 openat(AT_FDCWD, "/usr/lib/gcc/x86_64-redhat-linux/8/include/stdlib.h", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)
2396267 lstat("/usr/local/include/stdlib.h", 0x7fff0704f4c0) = -1 ENOENT (No such file or directory)
2396267 openat(AT_FDCWD, "/usr/local/include/stdlib.h", O_RDONLY|O_NOCTTY) = -1 ENOENT (No such file or directory)
</code></pre>
<p>The same <code>strace</code> shows the complete <code>g++</code> command-line as:</p>
<pre class="lang-json prettyprint-override"><code>[
"/usr/bin/g++",
"-pthread",
"-fno-strict-overflow",
"-Wsign-compare",
"-DDYNAMIC_ANNOTATIONS_ENABLED=1",
"-DNDEBUG",
"-O2",
"-g",
"-pipe",
"-Wall",
"-Werror=format-security",
"-Wp,-D_FORTIFY_SOURCE=2",
"-Wp,-D_GLIBCXX_ASSERTIONS",
"-fexceptions",
"-fstack-protector-strong",
"-grecord-gcc-switches",
"-m64",
"-mtune=generic",
"-fasynchronous-unwind-tables",
"-fstack-clash-protection",
"-fcf-protection",
"-O2",
"-g",
"-pipe",
"-Wall",
"-Werror=format-security",
"-Wp,-D_FORTIFY_SOURCE=2",
"-Wp,-D_GLIBCXX_ASSERTIONS",
"-fexceptions",
"-fstack-protector-strong",
"-grecord-gcc-switches",
"-m64",
"-mtune=generic",
"-fasynchronous-unwind-tables",
"-fstack-clash-protection",
"-fcf-protection",
"-O2",
"-g",
"-pipe",
"-Wall",
"-Werror=format-security",
"-Wp,-D_FORTIFY_SOURCE=2",
"-Wp,-D_GLIBCXX_ASSERTIONS",
"-fexceptions",
"-fstack-protector-strong",
"-grecord-gcc-switches",
"-m64",
"-mtune=generic",
"-fasynchronous-unwind-tables",
"-fstack-clash-protection",
"-fcf-protection",
"-fPIC",
"-DBOOST_LOG_DYN_LINK=1",
"-I/home/me/project",
"-I/usr/include",
"-I/usr/include/python3.12",
"-c",
"/home/me/project/mySource.cpp",
"-o",
"build/temp.linux-x86_64-cpython-312/..../mySource.o",
"-isystem",
"/home/me/project/headers_linux",
"-O0",
"-g",
"-ggdb3",
"-gdwarf-2",
"-Wno-unknown-pragmas",
"-Wno-invalid-offsetof",
"-isystem",
"/usr/include"
]
</code></pre>
<p>I don't see anything there that should be preventing the <code>stdlib.h</code> from being found in the standard location, yet found it is not...</p>
<p>What is happening?</p>
|
<python><g++><rhel8>
|
2025-09-22 17:54:14
| 0
| 4,246
|
Mikhail T.
|
79,771,903
| 1,620,696
|
Alembic migrations autogenerate with Docker Compose workflow
|
<p>I am developing a FastAPI project and using SQLAlchemy and Alembic. I'm running everything in Docker containers with Docker Compose. Now, I'm worried about something. I would like to take advantage of the <code>alembic revision --autogenerate</code> command, but it <em>requires</em> a connection to the database, obviously, as it compares the models against the current schema.</p>
<p>I came up with the following solution, which I'm unhappy with:</p>
<ol>
<li><p>I spin up <em>just</em> the database container <code>docker-compose up -d database</code></p>
</li>
<li><p>Then I put a connection string on a .env file inside the particular FastAPI service I'm working on</p>
</li>
<li><p>I use <code>load_dotenv()</code> in the <code>env.py</code> file to set the <code>DATABASE_URL</code> environment variable which is used to connect to the database (see the sketch after this list)</p>
</li>
<li><p>Now I can do <code>alembic revision --autogenerate</code></p>
</li>
</ol>
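<p>For concreteness, a minimal sketch of the <code>env.py</code> part of step 3 (the variable names are from my setup, not anything Alembic requires):</p>
<pre><code># env.py (fragment)
import os

from alembic import context
from dotenv import load_dotenv

load_dotenv()  # reads the local .env that only exists for development

config = context.config
config.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])
</code></pre>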
<p>It works, but I'm unhappy with it for many reasons. It doesn't feel very professional; it feels like there is a lot of manual work, and there are things that simply don't feel natural.</p>
<p>For example: the connection string I'm manually putting in the .env file is just stupid. This is something I need <em>only</em> when coding; it has nothing to do with running the app. Second, the actual connection string to run the app is provided in the environment files from Docker Compose, so it doesn't make sense to have it in an additional .env</p>
<p>What is the most professional way of dealing with this? How people deal with generating the migrations with Alembic when using Docker Compose?</p>
|
<python><docker-compose><sqlalchemy><devops><alembic>
|
2025-09-22 17:32:55
| 1
| 11,487
|
user1620696
|
79,771,657
| 8,231,936
|
How can I rename a duplicated column to make it unique in pandas?
|
<p>I have a dataframe with two duplicated columns that can be in variable positions.
In the example, COLUMN1 and COLUMN4 are optional.</p>
<pre><code>COLUMN1 CLASIFICATION CLASIFICATION COLUMN4
</code></pre>
<p>or</p>
<pre><code>CLASIFICATION CLASIFICATION COLUMN4
</code></pre>
<p>So far, this works:</p>
<pre><code>archivo_df.columns.values[3] = "CLASIFICATION2"
</code></pre>
<p>but it doesn't work when COLUMN1 is not present.</p>
<p>How can I rename it so that the column names are unique?
I would like to end up with something like:</p>
<pre><code>COLUMN1 CLASIFICATION CLASIFICATION2 COLUMN4
</code></pre>
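<p>For illustration, a position-independent way of de-duplicating the labels (the numbering scheme here is just one possible choice) could look like this:</p>
<pre><code>import pandas as pd

cols = pd.Series(archivo_df.columns)
for name in cols[cols.duplicated()].unique():
    positions = cols[cols == name].index
    cols[positions] = [name if i == 0 else f"{name}{i + 1}" for i in range(len(positions))]
archivo_df.columns = cols
</code></pre>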
|
<python><pandas><dataframe>
|
2025-09-22 13:22:43
| 0
| 517
|
Cristian Avendaño
|
79,771,481
| 14,236,974
|
How to extract enum from C-Header in Python
|
<p>I have an unprocessed C header that contains a lot of comments and defines, and a few enums.
Now I want to have 2 of these enums available as enums in Python, extracted from the header file at runtime.</p>
<p>What's the best way to get that enum at runtime in Python? Regex? FFI?</p>
<p>It should be pretty robust.</p>
<p>Example enum in C:</p>
<pre><code>enum data /* This is an enum */
{
TYPE_NULL,
TYPE_ABC,
TYPE_IN,
TYPE_OUT,
TYPE_INVALID
}
</code></pre>
<p>I want to use it like this:</p>
<pre class="lang-py prettyprint-override"><code>used_type = imported_enum[2]
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>in_position = imported_enum.index("TYPE_IN")
</code></pre>
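<p>As a rough illustration of the regex direction (the header path is a placeholder, and this only handles simple enums without nested braces), one could pull the member names into a plain list at runtime:</p>
<pre class="lang-py prettyprint-override"><code>import re

def read_enum_names(header_path: str, enum_name: str) -> list[str]:
    text = open(header_path, encoding="utf-8", errors="replace").read()
    body = re.search(rf"enum\s+{enum_name}\b[^{{]*\{{(.*?)\}}", text, re.S).group(1)
    body = re.sub(r"/\*.*?\*/", "", body, flags=re.S)   # drop block comments
    body = re.sub(r"//.*", "", body)                    # drop line comments
    return [part.split("=")[0].strip() for part in body.split(",") if part.strip()]

imported_enum = read_enum_names("some_header.h", "data")
in_position = imported_enum.index("TYPE_IN")   # usable like the examples above
</code></pre>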
|
<python><c>
|
2025-09-22 10:06:32
| 0
| 325
|
maribox
|
79,771,178
| 6,793,603
|
Display game variation tree using compact vertical layout
|
<p>I want to create a multi-line string that can be printed out or displayed via curses. This string displays the moves and variations of a go/weiqi/baduk game from an sgf file using unicode characters. Each vertical line of stones is supposed to represent one line of moves while variations show off to the side. I could not find other code that makes a tree using this compact layout. I am hoping to include this work in a console based program to create, analyze, or interact with game records. What I have below, so far, just makes a simple tree. I am looking to make the tree look like the diagram further below. Bonus points if there can be some helper functionality for background highlighting from the root node to the current selected node. That part comes later.</p>
<p>Note for clarity. The main variation of the game record starts from the root node and follows the chain of first children. That chain should show as the left most line of nodes. Each alternative variation should show in subsequent columns.</p>
<p>Current Code:</p>
<pre class="lang-py prettyprint-override"><code># import module that converts sgf file to nodes
# https://github.com/sanderland/pysgf
# https://github.com/jtauber/sgf/blob/master/examples/ff4_ex.sgf
from pysgf import SGF
# โฏ represents a black stone played
# โฌค represents a white stone played
# โถ represents the root or other node
class SGFNode:
# Simplified node class for demonstration.
# The parse_file function from the imported module, as used
# below, will make the tree using nodes like this one.
def __init__(self, parent=None, properties=None, name=None):
self.children = []
self.properties = defaultdict(list)
self.parent = parent
if "B" in self.properties: self.name = f"โฏ "
elif "W" in self.properties: self.name = f"โฌค "
else: self.name = f"โถ "
def depth(self): ...
def nodes_in_tree(self): ...
def nodes_from_root(self): ...
def print_tree(node, last=True, header=''):
size = node.board_size
other = "โฆ" if len(node.properties) > 1 else " "
blank = " "
elbow = "โฐโ"
pipe = "โ "
tee = "โโ"
if "B" in node.properties: print(f"{header}{elbow if last else tee}โฏ ")
elif "W" in node.properties: print(f"{header}{elbow if last else tee}โฌค ")
else: print(f"{header}{elbow if last else tee}โถ ")
if len(node.children) > 0:
for i, child in enumerate(node.children):
print_tree(
node=child,
header=f"{header}{blank if last else pipe}",
last=i == len(node.children) - 1,
)
root = SGF.parse_file("./examples/ff4_ex.sgf")
print_tree(root)
</code></pre>
<p>Current Output:</p>
<pre><code>╰─▶
├─●
│ ╰─○
│ ╰─●
│ ╰─○
│ ╰─●
│ ╰─○
│ ╰─●
│ ╰─○
│ ╰─●
│ ╰─○
│ ╰─●
│ ╰─○
│ ╰─●
├─▶
│ ╰─▶
│ ╰─▶
│ ╰─▶
├─▶
│ ╰─▶
│ ╰─▶
│ ╰─▶
├─●
│ ├─○
│ │ ├─●
│ │ ├─●
│ │ ├─●
│ │ ╰─●
│ ├─○
│ ├─○
│ ├─○
│ ├─○
│ ╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
╰─○
╰─●
</code></pre>
<p>Desired Output:</p>
<pre><code>▶─┬─┬─┬─────────────────╮
● ▶ ▶ ●───────┬─┬─┬─┬─╮ ●
○ ▶ ▶ ○─┬─┬─╮ ○ ○ ○ ○ ○ ○
● ▶ ▶ ● ● ● ● ●
○ ▶ ▶ ○
● ●
○ ○
● ●
○ ○
● ●
○ ○
● ●
○ ○
● ●
○
●
○
●
○
●
○
●
</code></pre>
<p>ff4_ex.sgf</p>
<pre><code>(;FF[4]AP[Primiview:3.1]GM[1]SZ[19]GN[Gametree 1: properties]US[Arno Hollosi]
(;B[pd]N[Moves, comments, annotations]
C[Nodename set to: "Moves, comments, annotations"];W[dp]GW[1]
C[Marked as "Good for White"];B[pp]GB[2]
C[Marked as "Very good for Black"];W[dc]GW[2]
C[Marked as "Very good for White"];B[pj]DM[1]
C[Marked as "Even position"];W[ci]UC[1]
C[Marked as "Unclear position"];B[jd]TE[1]
C[Marked as "Tesuji" or "Good move"];W[jp]BM[2]
C[Marked as "Very bad move"];B[gd]DO[]
C[Marked as "Doubtful move"];W[de]IT[]
C[Marked as "Interesting move"];B[jj];
C[White "Pass" move]W[];
C[Black "Pass" move]B[tt])
(;AB[dd][de][df][dg][do:gq]
AW[jd][je][jf][jg][kn:lq][pn:pq]
N[Setup]C[Black & white stones at the top are added as single stones.
Black & white stones at the bottom are added using compressed point lists.]
;AE[ep][fp][kn][lo][lq][pn:pq]
C[AddEmpty
Black stones & stones of left white group are erased in FF[3\] way.
White stones at bottom right were erased using compressed point list.]
;AB[pd]AW[pp]PL[B]C[Added two stones.
Node marked with "Black to play".];PL[W]
C[Node marked with "White to play"])
(;AB[dd][de][df][dg][dh][di][dj][nj][ni][nh][nf][ne][nd][ij][ii][ih][hq]
[gq][fq][eq][dr][ds][dq][dp][cp][bp][ap][iq][ir][is][bo][bn][an][ms][mr]
AW[pd][pe][pf][pg][ph][pi][pj][fd][fe][ff][fh][fi][fj][kh][ki][kj][os][or]
[oq][op][pp][qp][rp][sp][ro][rn][sn][nq][mq][lq][kq][kr][ks][fs][gs][gr]
[er]N[Markup]C[Position set up without compressed point lists.]
;TR[dd][de][df][ed][ee][ef][fd:ff]
MA[dh][di][dj][ej][ei][eh][fh:fj]
CR[nd][ne][nf][od][oe][of][pd:pf]
SQ[nh][ni][nj][oh][oi][oj][ph:pj]
SL[ih][ii][ij][jj][ji][jh][kh:kj]
TW[pq:ss][so][lr:ns]
TB[aq:cs][er:hs][ao]
C[Markup at top partially using compressed point lists (for markup on white stones); listed clockwise, starting at upper left:
- TR (triangle)
- CR (circle)
- SQ (square)
- SL (selected points)
- MA ('X')
Markup at bottom: black & white territory (using compressed point lists)]
;LB[dc:1][fc:2][nc:3][pc:4][dj:a][fj:b][nj:c]
[pj:d][gs:ABCDEFGH][gr:ABCDEFG][gq:ABCDEF][gp:ABCDE][go:ABCD][gn:ABC][gm:AB]
[mm:12][mn:123][mo:1234][mp:12345][mq:123456][mr:1234567][ms:12345678]
C[Label (LB property)
Top: 8 single char labels (1-4, a-d)
Bottom: Labels up to 8 char length.]
;DD[kq:os][dq:hs]
AR[aa:sc][sa:ac][aa:sa][aa:ac][cd:cj]
[gd:md][fh:ij][kj:nh]
LN[pj:pd][nf:ff][ih:fj][kh:nj]
C[Arrows, lines and dimmed points.])
(;B[qd]N[Style & text type]
C[There are hard linebreaks & soft linebreaks.
Soft linebreaks are linebreaks preceeded by '\\' like this one >o\
k<. Hard line breaks are all other linebreaks.
Soft linebreaks are converted to >nothing<, i.e. removed.
Note that linebreaks are coded differently on different systems.
Examples (>ok< shouldn't be split):
linebreak 1 "\\n": >o\
k<
linebreak 2 "\\n\\r": >o\
k<
linebreak 3 "\\r\\n": >o\
k<
linebreak 4 "\\r": >o\
k<]
(;W[dd]N[W d16]C[Variation C is better.](;B[pp]N[B q4])
(;B[dp]N[B d4])
(;B[pq]N[B q3])
(;B[oq]N[B p3])
)
(;W[dp]N[W d4])
(;W[pp]N[W q4])
(;W[cc]N[W c17])
(;W[cq]N[W c3])
(;W[qq]N[W r3])
)
(;B[qr]N[Time limits, captures & move numbers]
BL[120.0]C[Black time left: 120 sec];W[rr]
WL[300]C[White time left: 300 sec];B[rq]
BL[105.6]OB[10]C[Black time left: 105.6 sec
Black stones left (in this byo-yomi period): 10];W[qq]
WL[200]OW[2]C[White time left: 200 sec
White stones left: 2];B[sr]
BL[87.00]OB[9]C[Black time left: 87 sec
Black stones left: 9];W[qs]
WL[13.20]OW[1]C[White time left: 13.2 sec
White stones left: 1];B[rs]
C[One white stone at s2 captured];W[ps];B[pr];W[or]
MN[2]C[Set move number to 2];B[os]
C[Two white stones captured
(at q1 & r1)]
;MN[112]W[pq]C[Set move number to 112];B[sq];W[rp];B[ps]
;W[ns];B[ss];W[nr]
;B[rr];W[sp];B[qs]C[Suicide move
(all B stones get captured)])
)
(;FF[4]AP[Primiview:3.1]GM[1]SZ[19]C[Gametree 2: game-info
Game-info properties are usually stored in the root node.
If games are merged into a single game-tree, they are stored in the node\
where the game first becomes distinguishable from all other games in\
the tree.]
;B[pd]
(;PW[W. Hite]WR[6d]RO[2]RE[W+3.5]
PB[B. Lack]BR[5d]PC[London]EV[Go Congress]W[dp]
C[Game-info:
Black: B. Lack, 5d
White: W. Hite, 6d
Place: London
Event: Go Congress
Round: 2
Result: White wins by 3.5])
(;PW[T. Suji]WR[7d]RO[1]RE[W+Resign]
PB[B. Lack]BR[5d]PC[London]EV[Go Congress]W[cp]
C[Game-info:
Black: B. Lack, 5d
White: T. Suji, 7d
Place: London
Event: Go Congress
Round: 1
Result: White wins by resignation])
(;W[ep];B[pp]
(;PW[S. Abaki]WR[1d]RO[3]RE[B+63.5]
PB[B. Lack]BR[5d]PC[London]EV[Go Congress]W[ed]
C[Game-info:
Black: B. Lack, 5d
White: S. Abaki, 1d
Place: London
Event: Go Congress
Round: 3
Result: Balck wins by 63.5])
(;PW[A. Tari]WR[12k]KM[-59.5]RO[4]RE[B+R]
PB[B. Lack]BR[5d]PC[London]EV[Go Congress]W[cd]
C[Game-info:
Black: B. Lack, 5d
White: A. Tari, 12k
Place: London
Event: Go Congress
Round: 4
Komi: -59.5 points
Result: Black wins by resignation])
))
</code></pre>
<p>Edits:</p>
<ol>
<li>Fixed topology of desired output.</li>
<li>Added note to clarify intention.</li>
</ol>
|
<python><unicode><tree><nodes><display>
|
2025-09-22 02:10:51
| 1
| 315
|
Ali Kakakhel
|
79,771,046
| 145,682
|
pymongo is not inserting document with base64 string
|
<p>I have data which pymongo fails to upload, and I cannot understand why.</p>
<p>Data available <a href="https://gist.github.com/deostroll/e53d61cb9f01496ca9f5315a78389275#file-1758480758-json" rel="nofollow noreferrer">here</a></p>
<p><strong>main2.py</strong>:</p>
<pre class="lang-py prettyprint-override"><code>from pymongo import MongoClient
import json
client = MongoClient(host='localhost')
db = client.get_database('journal')
col = db.get_collection('entries')
file = '1758480758.json'
with open(file) as f:
    obj = json.load(f)
# not printing since obj is large
print(col.insert_one(obj))
</code></pre>
<p>The last print statement does not happen. The program waits a long time, after which it reports a connection-related error (more than 15-20 minutes). However, if a certain property is removed, the problem does not happen and the insert succeeds.</p>
<p><strong>main3.py</strong>:</p>
<pre class="lang-py prettyprint-override"><code>from pymongo import MongoClient
import json
client = MongoClient(host='localhost', port=27017)
db = client.get_database('journal')
col = db.get_collection('entries')
file = '1758480758.json'
with open(file) as f:
    obj = json.load(f)
del obj['bundle'][0]['data']
print(obj)
print(col.insert_one(obj))
</code></pre>
<p>In the case above, the program works as expected. It so happens that <code>data</code> in this context is a base64 encoded string. Is this behavior by design?</p>
<p>The record being inserted is not greater than 16 MB in size either...</p>
<p>Edit (after 11 hours): Stumped. Tried all of the above from Linux (specifically Ubuntu in WSL). On pure Windows this worked!</p>
<p>Edit (after 20 hours): Stumped again. A restart helped.</p>
|
<python><base64><pymongo>
|
2025-09-21 19:08:25
| 0
| 11,985
|
deostroll
|
79,770,865
| 4,408,232
|
Python: Converting string in NZ summer time to UTC gives wrong result
|
<p>I am creating a simple script where I need to convert
strings with date and time given in NZ time to the corresponding
UTC time.
The time series were recorded in the NZ summer (Dec-Feb), and hence
NZ time is 13 hours ahead of UTC.
Below is a simple test with just one NZ time string to be converted:</p>
<pre class="lang-python prettyprint-override"><code>from zoneinfo import ZoneInfo
from datetime import datetime
# Date time string in NZ
NZ_TimeStr = '2024-12-16T18:55:10Z'
# Corresponding date time string in UTC
UTC_TimeStr = '2024-12-16T05:55:10Z' # NZ summertime is 13 hours ahead of UTC
# timezone unaware datetime object
NZTime = datetime.strptime(NZ_TimeStr, '%Y-%m-%dT%H:%M:%SZ')
# Set timezone to NZ
NZTime.replace(tzinfo=ZoneInfo('NZ'))
converted_NZ2UTC_Time = NZTime.astimezone(ZoneInfo('UTC'))
expectedUTCTime = datetime.strptime(UTC_TimeStr, '%Y-%m-%dT%H:%M:%SZ')
expectedUTCTime.replace(tzinfo=ZoneInfo('UTC'))
print(f'Expected: {expectedUTCTime}\nConverted: {converted_NZ2UTC_Time}')
</code></pre>
<p>And when running this script I get the result:</p>
<pre><code>Expected: 2024-12-16 05:55:10
Converted: 2024-12-16 17:55:10+00:00
</code></pre>
<p>What am I doing wrong?</p>
|
<python><datetime><timezone><zoneinfo>
|
2025-09-21 12:32:39
| 1
| 301
|
IgorLopez
|
79,770,740
| 10,127,906
|
Follow selected track in Ableton using Python MIDI Remote script
|
<p>I'm building a DIY MIDI controller. It works well. The only problem is that the user needs to go left and right through banks and tracks to be able to control one particular track. So I want to focus on and control the track that was selected with the mouse.</p>
<p>I wrote a Python MIDI Remote Script which sends the track index to my Arduino, and from the Arduino I send channel_left/channel_right commands back to Ableton to activate that track. But it works badly and doesn't always do what I need. So instead I want to press a button and send a command to the Python MIDI Remote Script so that it selects the track by itself.</p>
<p>Python MIDI Remote Scripts are not documented well, so I still haven't found a solution. How can this be done?</p>
|
<python><midi><ableton-live>
|
2025-09-21 08:07:52
| 1
| 427
|
Artem
|
79,770,466
| 5,058,384
|
Flux Kontext copy part of an image into another
|
<p>I'm trying to copy/merge/integrate items from one image into another image using Flux. I'm trying to do this with Kontext, as I've been unable to understand how to do it with inpainting. I found one inpainting example (here: <a href="https://www.reddit.com/r/comfyui/comments/1ift1x2/best_way_to_inpaint_an_existing_object_person_to/" rel="nofollow noreferrer">https://www.reddit.com/r/comfyui/comments/1ift1x2/best_way_to_inpaint_an_existing_object_person_to/</a>) but it's less successful than the results I've been getting with Kontext.</p>
<p>Here is my Kontext workflow:</p>
<p><a href="https://i.sstatic.net/z1laFz25.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/z1laFz25.png" alt="enter image description here" /></a></p>
<pre><code>{"id":"ad73d17c-7233-4b3e-95f9-e22a4206f7fe","revision":0,"last_node_id":256,"last_link_id":393,"nodes":[{"id":38,"type":"DualCLIPLoader","pos":[-400,210],"size":[337.76861572265625,130],"flags":{},"order":0,"mode":0,"inputs":[{"localized_name":"clip_name1","name":"clip_name1","type":"COMBO","widget":{"name":"clip_name1"},"link":null},{"localized_name":"clip_name2","name":"clip_name2","type":"COMBO","widget":{"name":"clip_name2"},"link":null},{"localized_name":"type","name":"type","type":"COMBO","widget":{"name":"type"},"link":null},{"localized_name":"device","name":"device","shape":7,"type":"COMBO","widget":{"name":"device"},"link":null}],"outputs":[{"localized_name":"CLIP","name":"CLIP","type":"CLIP","links":[59]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"DualCLIPLoader","models":[{"name":"clip_l.safetensors","url":"https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors","directory":"text_encoders"},{"name":"t5xxl_fp8_e4m3fn_scaled.safetensors","url":"https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn_scaled.safetensors","directory":"text_encoders"}],"widget_ue_connectable":{}},"widgets_values":["clip_l.safetensors","t5xxl_fp8_e4m3fn_scaled.safetensors","flux","default"]},{"id":37,"type":"UNETLoader","pos":[-400,80],"size":[337.76861572265625,82],"flags":{},"order":1,"mode":0,"inputs":[{"localized_name":"unet_name","name":"unet_name","type":"COMBO","widget":{"name":"unet_name"},"link":null},{"localized_name":"weight_dtype","name":"weight_dtype","type":"COMBO","widget":{"name":"weight_dtype"},"link":null}],"outputs":[{"localized_name":"MODEL","name":"MODEL","type":"MODEL","links":[58]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"UNETLoader","models":[{"name":"flux1-dev-kontext_fp8_scaled.safetensors","url":"https://huggingface.co/Comfy-Org/flux1-kontext-dev_ComfyUI/resolve/main/split_files/diffusion_models/flux1-dev-kontext_fp8_scaled.safetensors","directory":"diffusion_models"}],"widget_ue_connectable":{}},"widgets_values":["flux1-dev-kontext_fp8_scaled.safetensors","default"]},{"id":135,"type":"ConditioningZeroOut","pos":[220,510],"size":[230,30],"flags":{"collapsed":false},"order":11,"mode":0,"inputs":[{"localized_name":"conditioning","name":"conditioning","type":"CONDITIONING","link":237}],"outputs":[{"localized_name":"CONDITIONING","name":"CONDITIONING","type":"CONDITIONING","links":[238]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.39","Node name for S&R":"ConditioningZeroOut","widget_ue_connectable":{}},"widgets_values":[]},{"id":136,"type":"SaveImage","pos":[500,60],"size":[350,300],"flags":{},"order":28,"mode":0,"inputs":[{"localized_name":"images","name":"images","type":"IMAGE","link":340},{"localized_name":"filename_prefix","name":"filename_prefix","type":"STRING","widget":{"name":"filename_prefix"},"link":null}],"outputs":[],"properties":{"cnr_id":"comfy-core","ver":"0.3.39","Node name for S&R":"SaveImage","widget_ue_connectable":{}},"widgets_values":["flux-kontext/%date:yyyy-MM-dd%/%date:hh-mm-ss%"]},{"id":8,"type":"VAEDecode","pos":[320,660],"size":[140,46],"flags":{"collapsed":false},"order":27,"mode":0,"inputs":[{"localized_name":"samples","name":"samples","type":"LATENT","link":52},{"localized_name":"vae","name":"vae","type":"VAE","link":61}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","slot_index":0,"links":[340,342]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for 
S&R":"VAEDecode","widget_ue_connectable":{}},"widgets_values":[]},{"id":39,"type":"VAELoader","pos":[-400,390],"size":[337.76861572265625,58],"flags":{},"order":2,"mode":0,"inputs":[{"localized_name":"vae_name","name":"vae_name","type":"COMBO","widget":{"name":"vae_name"},"link":null}],"outputs":[{"localized_name":"VAE","name":"VAE","type":"VAE","links":[61,362,391]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"VAELoader","models":[{"name":"ae.safetensors","url":"https://huggingface.co/Comfy-Org/Lumina_Image_2.0_Repackaged/resolve/main/split_files/vae/ae.safetensors","directory":"vae"}],"widget_ue_connectable":{}},"widgets_values":["ae.safetensors"]},{"id":177,"type":"ReferenceLatent","pos":[-10,390],"size":[210,46],"flags":{},"order":24,"mode":0,"inputs":[{"localized_name":"conditioning","name":"conditioning","type":"CONDITIONING","link":294},{"localized_name":"latent","name":"latent","shape":7,"type":"LATENT","link":392}],"outputs":[{"localized_name":"CONDITIONING","name":"CONDITIONING","type":"CONDITIONING","links":[292]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.41","Node name for S&R":"ReferenceLatent","widget_ue_connectable":{}},"widgets_values":[]},{"id":249,"type":"VAEEncode","pos":[-970,0],"size":[180,50],"flags":{"collapsed":false},"order":18,"mode":0,"inputs":[{"localized_name":"pixels","name":"pixels","type":"IMAGE","link":382},{"localized_name":"vae","name":"vae","type":"VAE","link":391}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","links":[387]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.39","Node name for S&R":"VAEEncode","widget_ue_connectable":{}},"widgets_values":[]},{"id":251,"type":"FluxKontextImageScale","pos":[-970,250],"size":[210,30],"flags":{"collapsed":false},"order":19,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":383}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[384]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"FluxKontextImageScale","widget_ue_connectable":{}},"widgets_values":[]},{"id":252,"type":"ImageToMask","pos":[-970,350],"size":[270,58],"flags":{},"order":21,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":384},{"localized_name":"channel","name":"channel","type":"COMBO","widget":{"name":"channel"},"link":null}],"outputs":[{"localized_name":"MASK","name":"MASK","type":"MASK","links":[388]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"ImageToMask","widget_ue_connectable":{}},"widgets_values":["red"]},{"id":253,"type":"MaskToImage","pos":[-1210,160],"size":[184.5833282470703,26],"flags":{},"order":10,"mode":0,"inputs":[{"localized_name":"mask","name":"mask","type":"MASK","link":390}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[385]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"MaskToImage","widget_ue_connectable":{}},"widgets_values":[]},{"id":254,"type":"Image 
Resize","pos":[-1220,250],"size":[230,180],"flags":{},"order":15,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":385},{"localized_name":"mode","name":"mode","type":"COMBO","widget":{"name":"mode"},"link":null},{"localized_name":"supersample","name":"supersample","type":"COMBO","widget":{"name":"supersample"},"link":null},{"localized_name":"resampling","name":"resampling","type":"COMBO","widget":{"name":"resampling"},"link":null},{"localized_name":"rescale_factor","name":"rescale_factor","type":"FLOAT","widget":{"name":"rescale_factor"},"link":null},{"localized_name":"resize_width","name":"resize_width","type":"INT","widget":{"name":"resize_width"},"link":null},{"localized_name":"resize_height","name":"resize_height","type":"INT","widget":{"name":"resize_height"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[383]}],"properties":{"cnr_id":"was-node-suite-comfyui","ver":"ea935d1044ae5a26efa54ebeb18fe9020af49a45","Node name for S&R":"Image Resize","widget_ue_connectable":{}},"widgets_values":["resize","true","lanczos",2,512,512]},{"id":255,"type":"FluxKontextImageScale","pos":[-980,-90],"size":[210,30],"flags":{"collapsed":false},"order":14,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":386}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[382]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"FluxKontextImageScale","widget_ue_connectable":{}},"widgets_values":[]},{"id":250,"type":"Image Resize","pos":[-1230,-90],"size":[230,180],"flags":{},"order":9,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":389},{"localized_name":"mode","name":"mode","type":"COMBO","widget":{"name":"mode"},"link":null},{"localized_name":"supersample","name":"supersample","type":"COMBO","widget":{"name":"supersample"},"link":null},{"localized_name":"resampling","name":"resampling","type":"COMBO","widget":{"name":"resampling"},"link":null},{"localized_name":"rescale_factor","name":"rescale_factor","type":"FLOAT","widget":{"name":"rescale_factor"},"link":null},{"localized_name":"resize_width","name":"resize_width","type":"INT","widget":{"name":"resize_width"},"link":null},{"localized_name":"resize_height","name":"resize_height","type":"INT","widget":{"name":"resize_height"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[386]}],"properties":{"cnr_id":"was-node-suite-comfyui","ver":"ea935d1044ae5a26efa54ebeb18fe9020af49a45","Node name for S&R":"Image Resize","widget_ue_connectable":{}},"widgets_values":["resize","true","lanczos",2,512,512]},{"id":6,"type":"CLIPTextEncode","pos":[-20,100],"size":[480,170],"flags":{},"order":5,"mode":0,"inputs":[{"localized_name":"clip","name":"clip","type":"CLIP","link":59},{"localized_name":"text","name":"text","type":"STRING","widget":{"name":"text"},"link":null}],"outputs":[{"localized_name":"CONDITIONING","name":"CONDITIONING","type":"CONDITIONING","slot_index":0,"links":[237,294]}],"title":"CLIP Text Encode (Positive Prompt)","properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"CLIPTextEncode","widget_ue_connectable":{}},"widgets_values":["Place the subject, the black camera, black tripod, black and white photo, into the art gallery space with the colour photograph and the white lamp. Place them against the gallery wall. Integrate the subject of the camera, tripod, black and white photograph into the space of the art gallery. 
The subject should not change but should be realistic in the space, have the correct lighting, shadows, scale."],"color":"#8b8a8f","bgcolor":"#77767b"},{"id":35,"type":"FluxGuidance","pos":[230,390],"size":[220,58],"flags":{"collapsed":false},"order":25,"mode":0,"inputs":[{"localized_name":"conditioning","name":"conditioning","type":"CONDITIONING","link":292},{"localized_name":"guidance","name":"guidance","type":"FLOAT","widget":{"name":"guidance"},"link":null}],"outputs":[{"localized_name":"CONDITIONING","name":"CONDITIONING","type":"CONDITIONING","slot_index":0,"links":[57]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"FluxGuidance","widget_ue_connectable":{}},"widgets_values":[7],"color":"#8b8a8f","bgcolor":"#77767b"},{"id":31,"type":"KSampler","pos":[0,660],"size":[280,262],"flags":{},"order":26,"mode":0,"inputs":[{"localized_name":"model","name":"model","type":"MODEL","link":58},{"localized_name":"positive","name":"positive","type":"CONDITIONING","link":57},{"localized_name":"negative","name":"negative","type":"CONDITIONING","link":238},{"localized_name":"latent_image","name":"latent_image","type":"LATENT","link":381},{"localized_name":"seed","name":"seed","type":"INT","widget":{"name":"seed"},"link":null},{"localized_name":"steps","name":"steps","type":"INT","widget":{"name":"steps"},"link":null},{"localized_name":"cfg","name":"cfg","type":"FLOAT","widget":{"name":"cfg"},"link":null},{"localized_name":"sampler_name","name":"sampler_name","type":"COMBO","widget":{"name":"sampler_name"},"link":null},{"localized_name":"scheduler","name":"scheduler","type":"COMBO","widget":{"name":"scheduler"},"link":null},{"localized_name":"denoise","name":"denoise","type":"FLOAT","widget":{"name":"denoise"},"link":null}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","slot_index":0,"links":[52]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"KSampler","widget_ue_connectable":{}},"widgets_values":[15298160231515,"randomize",35,1.5,"euler","simple",1],"color":"#8b8a8f","bgcolor":"#77767b"},{"id":217,"type":"Image Comparer (rgthree)","pos":[500,400],"size":[650,650],"flags":{},"order":29,"mode":0,"inputs":[{"dir":3,"name":"image_a","type":"IMAGE","link":342},{"dir":3,"name":"image_b","type":"IMAGE","link":374}],"outputs":[],"properties":{"cnr_id":"rgthree-comfy","ver":"944d5353a1b0a668f40844018c3dc956b95a67d7","comparer_mode":"Slide","widget_ue_connectable":{}},"widgets_values":[[{"name":"A","selected":true,"url":"/api/view?filename=rgthree.compare._temp_ibnga_00001_.png&type=temp&subfolder=&rand=0.9687730138517772"},{"name":"B","selected":true,"url":"/api/view?filename=rgthree.compare._temp_ibnga_00002_.png&type=temp&subfolder=&rand=0.9891933181984801"}]]},{"id":233,"type":"LoadImage","pos":[-1040,610],"size":[340,380],"flags":{},"order":3,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"COMBO","widget":{"name":"image"},"link":null},{"localized_name":"choose file to upload","name":"upload","type":"IMAGEUPLOAD","widget":{"name":"upload"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[363]},{"localized_name":"MASK","name":"MASK","type":"MASK","links":[354,364]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"LoadImage","widget_ue_connectable":{}},"widgets_values":["clipspace/clipspace-painted-masked-8072223.png 
[input]","image"],"color":"#8b8a8f","bgcolor":"#77767b"},{"id":238,"type":"FluxKontextImageScale","pos":[-410,610],"size":[210,30],"flags":{"collapsed":false},"order":12,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":359}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[360,374]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"FluxKontextImageScale","widget_ue_connectable":{}},"widgets_values":[]},{"id":239,"type":"VAEEncode","pos":[-400,700],"size":[180,50],"flags":{"collapsed":false},"order":16,"mode":0,"inputs":[{"localized_name":"pixels","name":"pixels","type":"IMAGE","link":360},{"localized_name":"vae","name":"vae","type":"VAE","link":362}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","links":[361]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.39","Node name for S&R":"VAEEncode","widget_ue_connectable":{}},"widgets_values":[]},{"id":237,"type":"Image Resize","pos":[-660,610],"size":[230,180],"flags":{},"order":6,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":363},{"localized_name":"mode","name":"mode","type":"COMBO","widget":{"name":"mode"},"link":null},{"localized_name":"supersample","name":"supersample","type":"COMBO","widget":{"name":"supersample"},"link":null},{"localized_name":"resampling","name":"resampling","type":"COMBO","widget":{"name":"resampling"},"link":null},{"localized_name":"rescale_factor","name":"rescale_factor","type":"FLOAT","widget":{"name":"rescale_factor"},"link":null},{"localized_name":"resize_width","name":"resize_width","type":"INT","widget":{"name":"resize_width"},"link":null},{"localized_name":"resize_height","name":"resize_height","type":"INT","widget":{"name":"resize_height"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[359]}],"properties":{"cnr_id":"was-node-suite-comfyui","ver":"ea935d1044ae5a26efa54ebeb18fe9020af49a45","Node name for S&R":"Image Resize","widget_ue_connectable":{}},"widgets_values":["resize","true","lanczos",2,512,512]},{"id":242,"type":"Image Resize","pos":[-650,940],"size":[230,180],"flags":{},"order":13,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":366},{"localized_name":"mode","name":"mode","type":"COMBO","widget":{"name":"mode"},"link":null},{"localized_name":"supersample","name":"supersample","type":"COMBO","widget":{"name":"supersample"},"link":null},{"localized_name":"resampling","name":"resampling","type":"COMBO","widget":{"name":"resampling"},"link":null},{"localized_name":"rescale_factor","name":"rescale_factor","type":"FLOAT","widget":{"name":"rescale_factor"},"link":null},{"localized_name":"resize_width","name":"resize_width","type":"INT","widget":{"name":"resize_width"},"link":null},{"localized_name":"resize_height","name":"resize_height","type":"INT","widget":{"name":"resize_height"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[365]}],"properties":{"cnr_id":"was-node-suite-comfyui","ver":"ea935d1044ae5a26efa54ebeb18fe9020af49a45","Node name for S&R":"Image 
Resize","widget_ue_connectable":{}},"widgets_values":["resize","true","lanczos",2,512,512]},{"id":241,"type":"MaskToImage","pos":[-660,850],"size":[184.5833282470703,26],"flags":{},"order":8,"mode":0,"inputs":[{"localized_name":"mask","name":"mask","type":"MASK","link":364}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[366]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"MaskToImage","widget_ue_connectable":{}},"widgets_values":[]},{"id":234,"type":"MaskPreview+","pos":[-1220,620],"size":[147.9166717529297,246.00006103515625],"flags":{},"order":7,"mode":0,"inputs":[{"localized_name":"mask","name":"mask","type":"MASK","link":354}],"outputs":[],"properties":{"cnr_id":"comfyui_essentials","ver":"9d9f4bedfc9f0321c19faf71855e228c93bd0dc9","Node name for S&R":"MaskPreview+","widget_ue_connectable":{}},"widgets_values":[]},{"id":243,"type":"FluxKontextImageScale","pos":[-410,850],"size":[210,30],"flags":{"collapsed":false},"order":17,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":365}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[367]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.38","Node name for S&R":"FluxKontextImageScale","widget_ue_connectable":{}},"widgets_values":[]},{"id":245,"type":"ImageToMask","pos":[-410,930],"size":[270,58],"flags":{},"order":20,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"IMAGE","link":367},{"localized_name":"channel","name":"channel","type":"COMBO","widget":{"name":"channel"},"link":null}],"outputs":[{"localized_name":"MASK","name":"MASK","type":"MASK","links":[368]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"ImageToMask","widget_ue_connectable":{}},"widgets_values":["red"]},{"id":236,"type":"SetLatentNoiseMask","pos":[-400,1050],"size":[180.6999969482422,46],"flags":{},"order":22,"mode":0,"inputs":[{"localized_name":"samples","name":"samples","type":"LATENT","link":361},{"localized_name":"mask","name":"mask","type":"MASK","link":368}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","links":[381]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"SetLatentNoiseMask","widget_ue_connectable":{}},"widgets_values":[]},{"id":256,"type":"SetLatentNoiseMask","pos":[-680,100],"size":[180.6999969482422,46],"flags":{},"order":23,"mode":0,"inputs":[{"localized_name":"samples","name":"samples","type":"LATENT","link":387},{"localized_name":"mask","name":"mask","type":"MASK","link":388}],"outputs":[{"localized_name":"LATENT","name":"LATENT","type":"LATENT","links":[392]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"SetLatentNoiseMask","widget_ue_connectable":{}},"widgets_values":[]},{"id":229,"type":"LoadImage","pos":[-1600,-90],"size":[340,380],"flags":{},"order":4,"mode":0,"inputs":[{"localized_name":"image","name":"image","type":"COMBO","widget":{"name":"image"},"link":null},{"localized_name":"choose file to upload","name":"upload","type":"IMAGEUPLOAD","widget":{"name":"upload"},"link":null}],"outputs":[{"localized_name":"IMAGE","name":"IMAGE","type":"IMAGE","links":[389]},{"localized_name":"MASK","name":"MASK","type":"MASK","links":[390]}],"properties":{"cnr_id":"comfy-core","ver":"0.3.56","Node name for S&R":"LoadImage","widget_ue_connectable":{}},"widgets_values":["clipspace/clipspace-painted-masked-6357992.png 
[input]","image"],"color":"#8b8a8f","bgcolor":"#77767b"}],"links":[[52,31,0,8,0,"LATENT"],[57,35,0,31,1,"CONDITIONING"],[58,37,0,31,0,"MODEL"],[59,38,0,6,0,"CLIP"],[61,39,0,8,1,"VAE"],[237,6,0,135,0,"CONDITIONING"],[238,135,0,31,2,"CONDITIONING"],[292,177,0,35,0,"CONDITIONING"],[294,6,0,177,0,"CONDITIONING"],[340,8,0,136,0,"IMAGE"],[342,8,0,217,0,"IMAGE"],[354,233,1,234,0,"MASK"],[359,237,0,238,0,"IMAGE"],[360,238,0,239,0,"IMAGE"],[361,239,0,236,0,"LATENT"],[362,39,0,239,1,"VAE"],[363,233,0,237,0,"IMAGE"],[364,233,1,241,0,"MASK"],[365,242,0,243,0,"IMAGE"],[366,241,0,242,0,"IMAGE"],[367,243,0,245,0,"IMAGE"],[368,245,0,236,1,"MASK"],[374,238,0,217,1,"IMAGE"],[381,236,0,31,3,"LATENT"],[382,255,0,249,0,"IMAGE"],[383,254,0,251,0,"IMAGE"],[384,251,0,252,0,"IMAGE"],[385,253,0,254,0,"IMAGE"],[386,250,0,255,0,"IMAGE"],[387,249,0,256,0,"LATENT"],[388,252,0,256,1,"MASK"],[389,229,0,250,0,"IMAGE"],[390,229,1,253,0,"MASK"],[391,39,0,249,1,"VAE"],[392,256,0,177,1,"LATENT"]],"groups":[{"id":1,"title":"Load models","bounding":[-410,10,360,450],"color":"#8AA","font_size":24,"flags":{}},{"id":3,"title":"Load image","bounding":[-1050,520,360,480],"color":"#8AA","font_size":24,"flags":{}},{"id":5,"title":"Prompt","bounding":[-30,10,500,270],"color":"#8AA","font_size":24,"flags":{}},{"id":6,"title":"Conditioning","bounding":[-30,300,500,250],"color":"#8AA","font_size":24,"flags":{}},{"id":7,"title":"Image/Mask Encode","bounding":[-680,520,510,480],"color":"#444","font_size":22,"flags":{}},{"id":8,"title":"Sampling and Decoding","bounding":[-20,570,500,370],"color":"#8AA","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.740024994425869,"offset":[750.4687329207507,47.55894650441739]},"frontendVersion":"1.23.4","groupNodes":{},"VHS_latentpreview":false,"VHS_latentpreviewrate":0,"VHS_MetadataImage":true,"VHS_KeepIntermediate":true,"ue_links":[],"links_added_by_ue":[]},"version":0.4}
</code></pre>
<p>It works to a point.</p>
<ol>
<li>the right area of the base image is targeted and the rest is left alone (perfect)</li>
<li>it samples the reference image (but only part of what it should)</li>
<li>it merges the part of the reference image correctly into the base image (removes background of reference image etc.) but modifies it</li>
</ol>
<p>Where am I going wrong here? Why does it not take all of what is referred to in the reference image? Why does it modify it? Do I need a mask on the reference image - does this do anything?</p>
|
<python><flux>
|
2025-09-20 17:41:43
| 0
| 966
|
garrettlynchirl
|
79,770,419
| 1,658,617
|
How do I implement a Python Protocol with one of two methods?
|
<p>For example, if I'd need to implement <code>Iterable</code> as a runtime checkable <code>typing.Protocol</code>, I'd implement one that either has <code>__iter__</code> or both <code>__len__</code> and <code>__getitem__</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Iterable[T](Protocol):
def __iter__(self) -> T:
...
# -- OR --
def __getitem__(self, index: int) -> T:
...
def __len__(self) -> int:
...
</code></pre>
<p>The only way I can think of is by using <code>ABC</code> and <code>__subclasshook__</code> much like <code>collections.abc</code>, but that will apply only for runtime and not static type checking.</p>
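<p>(A minimal sketch of that runtime-only <code>__subclasshook__</code> approach, for reference; it satisfies <code>isinstance</code> checks but tells a static type checker nothing:)</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABCMeta

class IterableABC(metaclass=ABCMeta):
    @classmethod
    def __subclasshook__(cls, C):
        def defines(name):
            return any(name in B.__dict__ for B in C.__mro__)
        # accept classes defining __iter__, or both __getitem__ and __len__
        if defines("__iter__") or (defines("__getitem__") and defines("__len__")):
            return True
        return NotImplemented
</code></pre>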
<p>Is there a way to implement such "at least one option" implementations?</p>
|
<python><python-typing>
|
2025-09-20 16:26:02
| 1
| 27,490
|
Bharel
|
79,770,363
| 4,503,546
|
Tiingo Python Pandas Datetime
|
<p>I am grabbing OHLC stock price data from tiingo using the following code:</p>
<p><code>df = get_tiingo_data(ticker, start_date="1990-01-01", end_date="2025-12-31")</code></p>
<p>Which calls this function:</p>
<pre><code>def get_tiingo_data(ticker, start_date, end_date):
url = f"https://api.tiingo.com/tiingo/daily/{ticker}/prices"
headers = {
"Content-Type": "application/json",
"Authorization": f"Token {API_KEY}"
}
params = {
"startDate": start_date,
"endDate": end_date
}
response = requests.get(url, headers=headers, params=params)
if response.status_code != 200:
raise Exception(f"Error fetching data: {response.status_code}, {response.text}")
data = response.json()
df = pd.DataFrame(data)
return df
</code></pre>
<p>I am then using a function to "clean" the data up including converting the date to Pandas datetime (which I am using as an index to combine data across stocks) with this code:</p>
<p><code>f1['Date'] = pd.to_datetime(f1['date'])</code></p>
<p>All of this works fine, the issue is that, in my resultant dataframe, the Datetime index output looks like this:</p>
<p>2025-09-18 00:00:00+00:00</p>
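<p>Checking the dtype confirms the column is timezone-aware (a quick diagnostic):</p>
<pre><code>print(f1['Date'].dtype)  # datetime64[ns, UTC] in my case
</code></pre>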
<p>Is it possible to remove the 00:00:00+00:00?</p>
<p>Many thanks.</p>
|
<python><pandas><tiingo>
|
2025-09-20 14:40:48
| 2
| 407
|
GC123
|
79,769,903
| 11,614,319
|
Python read matlab .mat file containing table
|
<p>I'm trying to read a MATLAB .mat file (v7.3) from Python. The thing is, one of the fields in the .mat object is a table (7x6) with named columns, and every time I read the object I only get a 1x6 array with random numbers. I've tried the h5py and pymatreader libraries, but it didn't work.</p>
<p>How do I correctly read the MATLAB .mat file in Python?</p>
<p>I'm adding a minimal working example.</p>
<p>So from the Matlab code:</p>
<pre class="lang-matlab prettyprint-override"><code>test_table = table([1, 2]', [3, 4]', VariableNames={'a', 'b'});
save("test_table_73.mat", "test_table", "-v7.3");
</code></pre>
<p>And the python code to read it:</p>
<pre class="lang-py prettyprint-override"><code>import mat73
data_73 = mat73.loadmat("test_table_73.mat")
print(data_73)
</code></pre>
<p>The python code output</p>
<pre class="lang-none prettyprint-override"><code>{'test_table': None}
ERROR:root:ERROR: MATLAB type not supported: table, (uint32)
</code></pre>
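<p>For reference, inspecting the raw HDF5 layout with h5py (one of the attempts mentioned above) looks roughly like this:</p>
<pre class="lang-py prettyprint-override"><code>import h5py

with h5py.File("test_table_73.mat", "r") as f:
    f.visit(print)  # lists every group/dataset stored in the file
</code></pre>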
|
<python><matlab><matlab-table>
|
2025-09-19 19:48:14
| 1
| 362
|
gee3107
|
79,769,871
| 9,705,687
|
PySide6 subclassing QSqlDatabase
|
<p>I've run into an issue with PySide6: I cannot subclass QSqlDatabase.</p>
<p>My biggest complaint about the QtSql system is that it does not throw exceptions when things go wrong. You have to manually check if things are really open or if there are any errors. So I want to subclass parts of it to automatically throw errors for me.</p>
<p>I asked this same question 5 years ago about PyQt5. That <a href="https://stackoverflow.com/a/62182133/9705687">solution</a> (from the venerable @eyllanesc) still works in PyQt6, but not PySide6. Here's a minimal example:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import (QApplication, QMainWindow, QTableView)
from PySide6.QtSql import (QSqlQuery, QSqlQueryModel, QSqlDatabase)
#from PyQt6.QtWidgets import (QApplication, QMainWindow, QTableView)
#from PyQt6.QtSql import (QSqlQuery, QSqlQueryModel, QSqlDatabase)
import sys
class ExceptionalDatabase(QSqlDatabase):
@staticmethod
def addDatabase(*args, **kwargs):
db = QSqlDatabase.addDatabase(*args, **kwargs)
db.__class__ = ExceptionalDatabase
return db
def open(self, user=None, pwd=None):
if user is None:
retval = super(ExceptionalDatabase, self).open()
else:
retval = super(ExceptionalDatabase, self).open(user=user, password=pwd)
if retval == False:
raise ValueError(self.lastError().text())
app = QApplication(sys.argv)
fid = open('example.db', 'w')
fid.close()
db = ExceptionalDatabase.addDatabase("QSQLITE")
db.setDatabaseName('example.db') # throws error that __init__ was not called
db.open()
if db.isOpen():
print('database opened')
else:
print('database did not open')
db.close()
sys.exit(app.exec())
</code></pre>
<p>It throws an error when I make the next call after <code>addDatabase</code>:</p>
<blockquote>
<p>RuntimeError: '__init__' method of object's base class (ExceptionalDatabase) not called.</p>
</blockquote>
<p>If I try to explicitly call the <code>__init__</code> function (<code>db.__init__()</code>), then it complains that I cannot call <code>__init__</code> twice.</p>
|
<python><pyside><pyside6><qsqldatabase>
|
2025-09-19 19:12:25
| 0
| 5,933
|
bfris
|
79,769,830
| 1,747,834
|
Question about Python's tp_repr implementation
|
<p>I created my own type in C++ (called "scenario"). In order to be able to send objects of this type over the network, I implemented converting it into a dictionary using my own <code>scenarioToDict()</code> function. My <code>tp_repr</code> method uses that same function to "represent" the data as a dictionary:</p>
<pre><code>static PyObject *
scenarioToDict(PyObject *self)
{
/*
* Put all our fields into a "dictionary" object:
*/
PyObject *result = PyDict_New();
for (const auto *m = scenarioGetSet; m->name != NULL; m++) {
if (m->get == NULL)
continue;
PyDict_SetItemString(result,
m->name, m->get(self, m->closure));
}
for (const auto *m = self->ob_type->tp_members; m->name != NULL; m++) {
void *p = ((char *)self + m->offset);
switch (m->type) {
case T_DOUBLE:
PyDict_SetItemString(result, m->name,
PyFloat_FromDouble(*(double *)p));
continue;
case T_INT:
PyDict_SetItemString(result, m->name,
PyLong_FromLong(*(int *)p));
continue;
default:
PyErr_Format(PyExc_NotImplementedError,
"%s: field '%s' is of unexpected type %d.",
__func__, m->name, m->type);
result->ob_type->tp_free(result);
return NULL;
}
}
return result;
}
static PyObject *
scenarioRepr(PyObject *self)
{
PyObject *result = scenarioToDict(self);
/*
* Convert the dictionary object with all our fields
* into a string -- using its own "repr"
*/
return result->ob_type->tp_repr(result);
}
</code></pre>
<p>The program works sometimes and sometimes crashes inside <code>PyObject_Repr()</code> -- a clear indication of memory corruption :(</p>
<p>I have a suspicion that I need to increase a ref-count <em>somewhere</em>, but where?</p>
|
<python><c++><refcounting>
|
2025-09-19 18:20:54
| 0
| 4,246
|
Mikhail T.
|
79,769,542
| 633,001
|
Python and multiple inheritance
|
<p>I tried to understand multiple-inheritance behaviour in Python, so I experimented a bit, but I have no idea why the output of my code is what it is. (I know that diamond inheritance is bad, but I was interested to see what happens.)</p>
<pre><code>class Base:
name = "Base"
def test(self):
print("I am", self.name)
class A(Base):
internal_name = "A"
def __init__(self):
self.name = "A"
super().__init__()
def test(self):
super().test()
print("Called from A")
class B(Base):
internal_name = "B"
def __init__(self):
self.name = "B"
super().__init__()
def test(self):
super().test()
print("Called from B")
class C(A, B):
def __init__(self):
self.name = "C"
A.__init__(self)
B.__init__(self)
def identify(self):
print("Internal name:", self.internal_name)
c = C()
c.test()
c.identify()
</code></pre>
<p>The output on running this code:</p>
<pre class="lang-none prettyprint-override"><code>I am B
Called from B
Called from A
Internal name: A
</code></pre>
<p>What I expected to see as output:</p>
<p><code>C()</code> calls the <code>__init__</code> of <code>C</code>, which sets <code>name</code> to "C", and then calls <code>A.__init__</code>, so I thought it would override <code>self.name</code> with "A" and then call <code>Base.__init__()</code>, which does nothing. Then the same with <code>B</code>, overriding <code>self.name</code> to "B".</p>
<p>I would expect the output to either be</p>
<pre class="lang-none prettyprint-override"><code>I am B
Called from A
I am B
Called from B
</code></pre>
<p>or just one call. But the fact that "I am B" is printed only <em>once</em> while "Called from" appears twice caught me completely by surprise. And then <code>internal_name</code> is "A" for some reason, even though I called <code>B.__init__()</code> second and <code>B</code> is also only the second class inherited from. What is happening with that order?</p>
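<p>To see which order <code>super()</code> walks through, printing the method resolution order helps (a small diagnostic):</p>
<pre><code>print(C.__mro__)  # C, A, B, Base, object
</code></pre>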
|
<python><multiple-inheritance>
|
2025-09-19 13:03:11
| 1
| 3,519
|
SinisterMJ
|
79,769,523
| 3,234,994
|
Pydantic model inserts None values in Databricks Delta table as string type instead of null type
|
<p>I have the below pydantic model with 6 columns out of which 2 columns are <code>null</code>able.</p>
<pre><code>from pydantic import BaseModel
from typing import Optional
class Purchases(BaseModel):
customer_id: int
customer_name: str
purchase_date: str
city: str
customer_nickname: Optional[str] = None
customer_address_type: Optional[str] = None
def insert_purchases(prchs: Purchases) -> None:
insert_qry = f"""
INSERT INTO cust_db.purchases(customer_id, customer_name, purchase_date, city, customer_nickname, customer_address_type)
VALUES ({prchs['customer_id']},
'{prchs['customer_name']}',
'{prchs['purchase_date']}',
'{prchs['city']}',
'{prchs['customer_nickname']}',
'{prchs['customer_address_type']}'
)"""
print(insert_qry)
spark.sql(insert_qry)
</code></pre>
<p>My input is as follows:</p>
<p><code>purchase_input = {"customer_id": 3, "customer_name": "John Doe", "purchase_date": "2011-12-27 13:04:52", "city": "New York", "customer_nickname": None, "customer_address_type": None}</code></p>
<p>I performed the <code>INSERT</code> operation using the below lines:</p>
<pre><code>prch_obj = Purchases
prch_obj.insert_purchases(purchase_input)
</code></pre>
<p>But both the columns <code>customer_nickname</code> and <code>customer_address_type</code> are getting a string type 'None' instead of <code>null</code> values:</p>
<pre><code>INSERT INTO cust_db.purchases(customer_id, customer_name, purchase_date, city, customer_nickname, customer_address_type)
VALUES (3,
'John Doe',
'2011-12-27 13:04:52',
'New York',
'None',
'None'
)
</code></pre>
<p><a href="https://i.sstatic.net/213nuM6N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/213nuM6N.png" alt="enter image description here" /></a></p>
<p>The only way I'm able to load <code>null</code> values into these fields is by using <code>CASE</code> in my <code>INSERT</code> statement for those two fields:</p>
<pre><code>def insert_purchases(prchs: Purchases) -> None:
insert_qry = f"""
INSERT INTO cust_db.purchases(customer_id, customer_name, purchase_date, city, customer_nickname, customer_address_type)
VALUES ({prchs['customer_id']},
'{prchs['customer_name']}',
'{prchs['purchase_date']}',
'{prchs['city']}',
CASE WHEN '{prchs['customer_nickname']}' = 'None' THEN NULL else '{prchs['customer_nickname']}' END,
CASE WHEN '{prchs['customer_address_type']}' = 'None' THEN NULL else '{prchs['customer_address_type']}' END
)"""
print(insert_qry)
spark.sql(insert_qry)
</code></pre>
<p>which then correctly inserts <code>null</code> values:</p>
<p><a href="https://i.sstatic.net/nNt1aKPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nNt1aKPN.png" alt="enter image description here" /></a></p>
<p>But I do have many tables where I need to handle incoming None/NULL values, and I think using <code>CASE</code> is overkill when I have to do it on hundreds of fields.</p>
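<p>To be clear about the root cause (a tiny repro, nothing Databricks-specific): the f-string simply renders <code>None</code> as the literal text <code>None</code>, which then ends up quoted as a string in the SQL:</p>
<pre><code>prchs = {"customer_nickname": None}
print(f"'{prchs['customer_nickname']}'")  # prints 'None'
</code></pre>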
<p>Is there a better way to achieve this?</p>
|
<python><pyspark><databricks><pydantic>
|
2025-09-19 12:52:24
| 1
| 2,583
|
LearneR
|
79,769,501
| 1,924,194
|
pip not updating after pip.ini edit
|
<p>I edited my <code>pip.ini</code> file. The command <code>pip config debug</code> shows that the file is taken into account, but it looks like it is still stuck with the old content of the file.</p>
<p>My old config uses <em>mycompanyrepo0</em> and <em>mycompanyrepo1</em>. I try to use <em>mycompanyrepo2</em> with pip install.</p>
<p>Old C:\Users\XXXXX\pip\pip.ini:</p>
<pre><code>[global]
timeout = 60
index-url = https://artifactory.mycompanyrepo1/artifactory/api/pypi/XXX-pypi-packages/simple
trusted-host = artifactory.mycompanyrepo1
</code></pre>
<p>New C:\Users\XXXXX\pip\pip.ini:</p>
<pre><code>[global]
timeout = 60
index-url = https://artifactory.mycompanyrepo2/artifactory/api/pypi/mynewlib-pypi-packages/simple
trusted-host = artifactory.mycompanyrepo2
</code></pre>
<p>command <code>pip config debug</code>:</p>
<pre><code>(nb) C:\Users\XXXXX>pip config debug
env_var:
env:
global:
C:\ProgramData\pip\pip.ini, exists: True
global.timeout: 60
global.index: https://artifactory.mycompanyrepo1/artifactory/api/pypi/python-proxy-pypi/simple
global.index-url: https://artifactory.mycompanyrepo1/artifactory/api/pypi/python-proxy-pypi/simple
global.trusted-host: artifactory.mycompanyrepo1
site:
C:\Users\XXXXX\.conda\envs\nb\pip.ini, exists: False
user:
C:\Users\XXXXX\pip\pip.ini, exists: True
global.timeout: 60
global.index-url: https://artifactory.mycompanyrepo0/artifactory/api/pypi/python/simple
global.trusted-host: artifactory.mycompanyrepo0
C:\Users\XXXXX\AppData\Roaming\pip\pip.ini, exists: True
global.timeout: 60
global.index-url: https://artifactory.mycompanyrepo0/artifactory/api/pypi/python/simple
global.trusted-host: artifactory.mycompanyrepo0
</code></pre>
<p>pip install only tries <em>mycompanyrepo0</em>:</p>
<pre><code>(nb) C:\Users\XXXXX>pip install docreaper
Looking in indexes: https://artifactory.mycompanyrepo0/artifactory/api/pypi/python/simple
ERROR: Could not find a version that satisfies the requirement docreaper (from versions: none)
ERROR: No matching distribution found for docreaper
</code></pre>
|
<python><pip>
|
2025-09-19 12:29:45
| 0
| 903
|
Vulpo
|
79,769,419
| 18,362,468
|
OpenAI Agents SDK doesn't call tools or subagents when using Structured Outputs
|
<p>For the following example the tool call doesn't work; however, if I remove the <code>output_type</code> then the tool call does work.</p>
<p>Why does this happen, and how can I fix it?</p>
<pre class="lang-py prettyprint-override"><code>@function_tool
def get_weather(city: str) -> str:
"""
Use this tool to get the weather for a city
"""
api_key=os.getenv('WEATHER_API_KEY')
url = f'http://api.weatherapi.com/v1/current.json?key={api_key}&q={city}&aqi=no'
response = requests.get(url)
weather_data = response.json()
location = weather_data.get('location')
current = weather_data.get('current')
return f'Current weather for `{location}` is `{current}'
class WeatherOutput(BaseModel):
is_good_weather: bool = Field(..., description='Is the weather good (for travel or outgoing?)')
details: str = Field(..., description='Give the detailed weather output')
async def weather_friend(agent_input: str):
agent = Agent(name="Weather Agent", instructions="You are a helpful weather assistant.", model=model, tools=[get_weather],
output_type=WeatherOutput
)
result = await Runner.run(agent, agent_input)
return result.final_output
</code></pre>
<p>I checked the trace and it does show that a tool call is needed; however, the tool never got called. See below:</p>
<pre><code>{
"function_call": null,
"reasoning_content": "Okay, the user is asking about the weather in London. Let me check what tools I have available. There's a function called get_weather that takes a city parameter. Since the user specified London, I should call that function with the city set to London. I don't need any other information because the city is provided. Let me generate the tool call.",
"content": "{\"is_good_weather\": true, \"details\": \"The weather in London is currently clear with a temperature of 18ยฐC.\"}",
"role": "assistant",
"tool_calls": null
}
</code></pre>
|
<python><agent><openai-agents><litellm>
|
2025-09-19 10:54:30
| 1
| 839
|
blest
|
79,769,393
| 15,560,990
|
How to pass a Python datetime to BigQuery through Avro?
|
<p>I have a Python pipeline where I try to:</p>
<ol>
<li>get some json data from an API response</li>
<li>modify it via duckdb (just in memory)</li>
<li>convert the resulting data to a list of python dicts (1 row: 1 dict)</li>
<li>Add a python datetime to every dict in the list</li>
<li>write an avro file from that dict</li>
<li>load the avro file to BigQuery</li>
</ol>
<p>But I keep having issues due to the datetime field <code>request_ts</code>.
In my BQ schema, <code>request_ts</code> is typed as <code>TIMESTAMP</code></p>
<p>My avro schema is generated by the <a href="https://github.com/jpmorganchase/py-avro-schema" rel="nofollow noreferrer">py_avro_schema</a> library via a dataclass.
In it, I was specifying that <code>request_ts</code> was of type Optional[datetime], but apparently this is wrong because then avro just maps the value to a regular string, whereas avro timestamps are longs with an additional logical type. So I manually modify my schema:</p>
<pre><code>schema_bytes = generate(MyClass, namespace="myclass")
schema_str = schema_bytes.decode("utf-8") if isinstance(schema_bytes, bytes) else schema_bytes
schema = json.loads(schema_str)
schema["fields"][0] = {
"name": "request_ts",
"type": [
"null",
{
"type": "long",
"logicalType": "timestamp-micros"
}
],
"default": None,
}
</code></pre>
<p>I've tried to generate <code>request_ts</code> as a Python datetime, an isoformat string, and an integer timestamp:</p>
<pre><code># tried each approach individually
datetime.datetime.now(tz=datetime.timezone.utc)
datetime.datetime.now(tz=datetime.timezone.utc).isoformat()
int(datetime.datetime.now(tz=datetime.timezone.utc).timestamp())
</code></pre>
<p>But it always leads to errors in Bigquery, it is unable to parse request_ts into a timestamp.</p>
<ul>
<li>The issue with the isoformat string approach is that the Avro schema then has to say that request_ts is a string, which BQ will not interpret as a timestamp if it comes from avro.</li>
<li>The issue with the timestamp is that BQ claims it is just an Int64 value (even if the schema shows the field should be <code>"type": ["null",{"type": "long","logicalType":"timestamp-millis"}]</code>)</li>
</ul>
<p>I've also tried to just select a timestamp in duckdb. This is typed by python as a Pandas Timestamp, which also fails because it doesn't map to an avro long.</p>
<p>So I seem to be missing something. I thought that maybe the BQ load job had to be manually configured to use Avro logical types, but apparently this is not an option in the Python API, and simply having the logical types specified in the schema should make BQ try to parse the types properly.</p>
<p>I additionally tried to recreate the table without time partitioning, in which case the job succeeds, but the timestamp is just the epoch timestamp in 1970, not the current one.</p>
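<p>For reference, the Avro write step (step 5 above) is roughly this, simplified; the real code builds one dict per row from the duckdb result and reuses the <code>schema</code> dict shown earlier:</p>
<pre><code>import datetime
import fastavro

parsed = fastavro.parse_schema(schema)
records = [{"request_ts": datetime.datetime.now(tz=datetime.timezone.utc)}]  # plus the other fields per row
with open("out.avro", "wb") as f:
    fastavro.writer(f, parsed, records)
</code></pre>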
|
<python><google-cloud-platform><google-bigquery><avro><fastavro>
|
2025-09-19 10:19:54
| 2
| 460
|
Dasph
|
79,769,347
| 1,216,183
|
Python type hint problem with mixin depending on another
|
<p>I'm struggling to get the right types in code involving two related classes and two related mixins, which can be applied to these classes.</p>
<p>Here is the minimal code I have to demonstrate my problem and expectations.</p>
<p>Three things I need help on:</p>
<ol>
<li>The last statement <code>TestCImpl2().mixin_func2()</code> does not return a compatible type with what I'm expecting. Any idea to fix that?</li>
<li>The current code requires to repeat a generic type on the mixin: <code>class TestCImpl2(MixinOnMyBase2[MyBase2[TestCImpl1]], MyBase2[TestCImpl1])</code>. I'm pretty sure there is a solution which does not involve having to do this.</li>
<li>I currently use a Protocol, but I'm not sure it is required here.</li>
</ol>
<p>Note: I know I'm over-engineering it, but I take the occasion to go further in Python typing system.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol, Self
# --- Not in my control ---
# A simple class
class Base1:
pass
# A class which require a generic type
class Base2[T]:
pass
# --- My base classes and mixins ---
# My simple base class with a custom function which return the implementation class type
class MyBase1(Base1):
def base_func1(self) -> Self:
return self
# My generic-type base class with a custom function which return the implementation class type
# I require the generic type to be based on MyBase1 and default to MyBase1 when not specified
# The class has a property of generic-type MyBase1 and a function returning implementation class type
class MyBase2[MyT: MyBase1 = MyBase1](Base2[MyT]):
object1: MyT
def base_func2(self) -> Self:
return self
# A mixin I want sometimes to apply on MyBase1 based classes
class MixinOnMyBase1:
def mixin_func1(self) -> Self:
return self
# A Protocol just to type-hint a generic type with object1 and base_func2,
# I'm not sure it is required
class MixinOnMyBase2Protocol[T](Protocol):
@property
def object1(self) -> MixinOnMyBase1: ...
def base_func2(self, *args, **kwargs) -> T: ...
# A mixin I want to sometimes to apply on MyBase2 based classes, which works by pair with MyBase1
class MixinOnMyBase2[T]:
def mixin_func2(self: MixinOnMyBase2Protocol[T]) -> T:
self.object1.mixin_func1()
return self.base_func2()
# --- My expected final implementations ---
# Test A
class TestAImpl1(MyBase1):
pass
class TestAImpl2(MyBase2):
pass
a: MyBase2 = TestAImpl2().base_func2()
# Test B
class TestBImpl1(MyBase1):
pass
class TestBImpl2(MyBase2[TestBImpl1]):
pass
b: MyBase2[TestBImpl1] = TestBImpl2().base_func2()
c: TestBImpl2 = TestBImpl2().base_func2()
# Test C
class TestCImpl1(MixinOnMyBase1, MyBase1):
pass
# I find ugly to have to repeat MyBase2[TestCImpl1] on the mixin
class TestCImpl2(MixinOnMyBase2[MyBase2[TestCImpl1]], MyBase2[TestCImpl1]):
pass
d: MyBase2[TestCImpl1] = TestCImpl2().mixin_func2()
# Here mypy complain:
# Incompatible types in assignment (expression has type "MyBase2[TestCImpl1]", variable has type "TestCImpl2")
# Mypy assignment
e: TestCImpl2 = TestCImpl2().mixin_func2()
</code></pre>
|
<python><python-typing><mypy>
|
2025-09-19 09:32:51
| 0
| 2,213
|
fabien-michel
|
79,769,295
| 14,282,714
|
AttributeError: 'DynamicCache' object has no attribute 'seen_tokens'
|
<p>I'm following the <a href="https://www.oreilly.com/library/view/hands-on-large-language/9781098150952/" rel="nofollow noreferrer"><code>Hands-On Large Language Models</code></a> book to learn more about LLMs. I'm trying to generate text using the <code>"microsoft/Phi-3-mini-4k-instruct"</code> model which is used in the book. Somehow I get an error while trying the example code:</p>
<pre><code>from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-4k-instruct",
#device_map = "cuda",
torch_dtype = "auto",
trust_remote_code = True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
# create a pipeline
generator = pipeline(
"text-generation",
model = model,
tokenizer = tokenizer,
return_full_text = False,
max_new_tokens = 500,
do_sample = False
)
# The prompt
messages = [
{"role": "user",
"content": "Create a funny joke about chickens."}
]
# Generate output
output = generator(messages)
print(output[0]["generated_text"])
</code></pre>
<p>Which returns the following error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipython-input-1474234034.py in <cell line: 0>()
6
7 # Generate output
----> 8 output = generator(messages)
9 print(output[0]["generated_text"])
8 frames
~/.cache/huggingface/modules/transformers_modules/microsoft/Phi-3-mini-4k-instruct/0a67737cc96d2554230f90338b163bc6380a2a85/modeling_phi3.py in prepare_inputs_for_generation(self, input_ids, past_key_values, attention_mask, inputs_embeds, **kwargs)
1289 if isinstance(past_key_values, Cache):
1290 cache_length = past_key_values.get_seq_length()
-> 1291 past_length = past_key_values.seen_tokens
1292 max_cache_length = past_key_values.get_max_length()
1293 else:
AttributeError: 'DynamicCache' object has no attribute 'seen_tokens'
</code></pre>
<p>The model is loading, but I don't know why this error is happening.</p>
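<p>In case the library versions matter, this is the kind of check I can add (diagnostic only):</p>
<pre><code>import torch
import transformers

print(transformers.__version__, torch.__version__)
</code></pre>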
|
<python><large-language-model><transformer-model>
|
2025-09-19 08:39:58
| 1
| 42,724
|
Quinten
|
79,769,146
| 1,581,090
|
How to fix Jupyter Lab installation on Windows 11 (using powershell)?
|
<p>On Windows 11 I had <code>jupyter lab</code> running before (on PowerShell), but now it seems to be gone. I installed and reinstalled <code>jupyter lab</code>:</p>
<pre><code>pip uninstall jupyterlab
pip install jupyterlab
</code></pre>
<p>but this does not help. The system is windows 11, python 3.10.11, pip 25.2. Running the command</p>
<pre><code>jupyter lab
</code></pre>
<p>I get</p>
<pre><code>jupyter : The term 'jupyter' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is
correct and try again.
At line:1 char:1
+ jupyter lab
+ ~~~~~~~
+ CategoryInfo : ObjectNotFound: (jupyter:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
</code></pre>
<p>The command <code>Get-Command python</code> returns</p>
<pre><code>C:\Program Files\Python310\python.exe
</code></pre>
<p>But I see that the <code>jupyter</code> stuff is installed in a folder like</p>
<pre><code>C:\Users\DALEX\AppData\Roaming\Python\Python310\Scripts
</code></pre>
<p>And even if I run</p>
<pre><code> python -m pip install jupyterlab
</code></pre>
<p>the <code>jupyter</code> stuff is installed in the same <code>AppData</code> folder.</p>
<p>I checked the environment variables and see that the path</p>
<pre><code>C:\Users\DALEX\AppData\Roaming\Python\Python310\Scripts
</code></pre>
<p>is inside PATH, but it seems to be ignored.</p>
<p><a href="https://i.sstatic.net/mLcJZGJD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLcJZGJD.png" alt="enter image description here" /></a></p>
<p>Also interestingly, the command <code>python -m jupyter --version</code> returns the following:</p>
<pre><code>Selected Jupyter core packages...
IPython : 8.37.0
ipykernel : 6.29.5
ipywidgets : not installed
jupyter_client : 8.6.3
jupyter_core : 5.8.1
jupyter_server : 2.16.0
jupyterlab : 4.4.7
nbclient : 0.10.2
nbconvert : 7.16.6
nbformat : 5.10.4
notebook : 7.4.5
qtconsole : not installed
traitlets : 5.14.3
</code></pre>
<p>so <code>python</code> "sees" the jupyter stuff. But starting it with <code>jupyter lab</code> or <code>python -m jupyter lab</code> or <code>python -m jupyter-lab</code> won't work.</p>
<p>The only way I can start it is to start it with the full path like</p>
<pre><code>C:\Users\DALEX\AppData\Roaming\Python\Python310\Scripts\jupyter-lab.exe
</code></pre>
<p>Is there a way to fix that? Or do I have to stick with the workaround?</p>
|
<python><windows><powershell><jupyter-notebook>
|
2025-09-19 05:23:46
| 1
| 45,023
|
Alex
|
79,768,990
| 667,355
|
Saving a figure with the name of passed dataframe
|
<p>I have a function that receives a pandas dataframe as an argument, plots it, and saves the generated figure with the name of the passed dataframe.</p>
<p>For instance this is the function:</p>
<pre><code>def plot_function(df):
plt.figure(figsize=(8, 5))
plt.bar(df["x"], df["y"])
plt.title(f"Bar Chart")
plt.savefig(f"{str(df)}.png")
plt.show()
</code></pre>
<p>and I call <code>plot_function(test_df)</code>, and I would like the function to plot the passed dataframe and save the bar chart under the name "test_df.png".</p>
|
<python><pandas><matplotlib>
|
2025-09-18 23:05:45
| 2
| 3,491
|
amiref
|
79,768,968
| 2,647,342
|
I need a Python implementation of my Longest Possible Common Subsequence (LPCS) algorithm
|
<p>Please be merciful - I've never asked a question here (I've answered a few) and am a total Python noob.</p>
<p>I developed an algorithm in SQL Server to compute the <strong>Longest <em>Possible</em> Common Subsequence (LPCS)</strong> between two strings and am struggling to rewrite it in Python.</p>
<p><strong>The Algorithm</strong></p>
<p>Consider these two strings: "DDB089A3D8014E" and "83B8FC624F774A". The LCS between the two strings is "8384", the LCS length is 4. The length of the longest <em><strong>possible</strong></em> subsequence between the two is 6. To prove this I can sort the characters in each string alphabetically and compute the LCS again: <code>LCS("00134889ABDDDE","234467788ABCFF") = "3488AB"</code></p>
<p>With <em>K</em> as the length of the alphabet, my algorithm computes the Longest <em>Possible</em> Common Subsequence in O(K) time. It's been an invaluable NLP tool in my SQL work, but now I need it in Python (Go Lang works too BTW).</p>
<p><strong>UPDATE...</strong></p>
<p>My original version of the question was closed and I have since figured out how to do it in Python. The ultimate goal was to apply my logic to three strings. Thank you to @Unmitigated for your help; this was my first Python function.</p>
<pre><code>from collections import Counter
def lpcs_optimized(string1: str, string2: str, string3: str, return_sequence: bool = False) -> tuple[int, str] | int:
"""
Compute the LPCS3 length for three strings, optimized using frequency maps.
Args:
string1 (str): First string.
string2 (str): Second string.
string3 (str): Third string.
return_sequence (bool): If True, return (length, sequence); else return length.
Returns:
int or Tuple[int, str]: LPCS3 length, or (length, sequence) if return_sequence=True.
Time Complexity: O(m + n + o), where m, n, o are string lengths.
"""
# Precompute frequency maps
freq1 = Counter(string1)
freq2 = Counter(string2)
freq3 = Counter(string3)
cs = 0
sequence = []
# Iterate over characters in first string
for char in freq1:
if char in freq2 and char in freq3:
min_freq = min(freq1[char], freq2[char], freq3[char])
cs += min_freq
if return_sequence:
sequence.extend([char] * min_freq)
if return_sequence:
return cs, ''.join(sequence)
return cs
# Example usage
if __name__ == "__main__":
s1 = "aaabcff"
s2 = "aabbceff"
s3 = "aaabbcdf"
print(f"LPCS3 length: {lpcs_optimized(s1, s2, s3)}") # Output: 5
print(f"LPCS3 sequence: {lpcs_optimized(s1, s2, s3, return_sequence=True)[1]}") # Output: aabbc
</code></pre>
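<p>The original two-string version reduces to the same multiset intersection; a minimal sketch:</p>
<pre><code>from collections import Counter

def lpcs(s1: str, s2: str) -> int:
    # length of the longest *possible* common subsequence:
    # count each character the minimum number of times it appears in both strings
    return sum((Counter(s1) & Counter(s2)).values())

print(lpcs("DDB089A3D8014E", "83B8FC624F774A"))  # 6
</code></pre>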
|
<python><algorithm><nlp>
|
2025-09-18 22:16:51
| 1
| 8,008
|
Alan Burstein
|
79,768,902
| 13,132,728
|
Sort each row of a pandas column consisting of delimited strings
|
<h1>CONTEXT</h1>
<p>Let's say I have <code>df</code>, which consists of a column of delimited strings that I would like to sort.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'sort_me':['foo; bar','foo; bar','bar; foo']})
df
sort_me
0 foo; bar
1 foo; bar
2 bar; foo
</code></pre>
<h1>DESIRED OUTPUT</h1>
<p>Given I would like to sort these strings, this would be my desired output:</p>
<pre><code> sort_me
0 bar; foo
1 bar; foo
2 bar; foo
</code></pre>
<h1>WHAT I HAVE TRIED</h1>
<p>I figured I could turn each delimited string into a list with <code>str.split()</code>, sort that list, and then join that list back together with <code>join()</code> using a lambda function:</p>
<pre><code>df.sort_me.str.split(';').apply(sorted).apply(lambda x: ';'.join(x))
sort_me
0 bar;foo
1 bar;foo
2 foo;bar
</code></pre>
<p>I thought I was so clever for this solution, but it appears to "unsort" any row that was already coincidentally sorted (see: index 2). Why is <code>sorted</code> displaying such behavior? Is there a better way to go about this problem?</p>
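<p>Inspecting the intermediate split shows exactly what <code>sorted</code> receives (a quick diagnostic):</p>
<pre><code>print(df.sort_me.str.split(';').tolist())
# [['foo', ' bar'], ['foo', ' bar'], ['bar', ' foo']]
</code></pre>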
|
<python><pandas><dataframe><list><sorting>
|
2025-09-18 20:47:22
| 1
| 1,645
|
bismo
|
79,768,658
| 597,234
|
Matching on a type hint
|
<p>I have to interact with an API that returns every result in binary, so I am writing a converter that takes a mapping of field names to types and converts each returned result to the expected type:</p>
<pre class="lang-py prettyprint-override"><code>@classmethod
def _normalize_result(cls, item: dict, column_types: Optional[dict[str, type]] = None) -> dict:
normalized_record = {}
for attrib, value in item.items():
expected_type = (column_types or {}).get(attrib, str)
match expected_type:
case list():
if attrib not in normalized_record:
normalized_record[attrib] = []
normalized_record[attrib].append(cls._unpack_single_list(value, expected_type))
case _:
normalized_record[attrib] = cls._unpack_single_list(value, expected_type)
return normalized_record
@classmethod
def _unpack_single_list(cls, item, expected_type: type = str):
if expected_type and isinstance(item, expected_type):
return item
match item, expected_type:
case list(), list():
raise NotImplementedException('list typing is not supported yet')
case list(), _:
if len(item) != 1:
raise RuntimeError(f"List can only have one item, has {len(item)}")
return cls._unpack_single_list(item[0], expected_type)
case bytes(), _:
return cls._unpack_single_list(item.decode(), expected_type)
case str(), int():
return int(item)
case str(), _:
return item
raise RuntimeError('Unexpected type')
</code></pre>
<p>Type checking warns me that <code>case str(), int():</code> will never be reached, which I know because it's a type, not an instance of the type. I am using <code>match</code> because I need to process based on the pair of <code>(current type, desired type)</code>.</p>
<p>Practically, the type will be one of (I know not all of these are implemented in the code sample):</p>
<ul>
<li><code>str</code></li>
<li><code>int</code></li>
<li><code>binary</code></li>
<li><code>Picture</code> (converted from binary)</li>
<li><code>SSLCert</code> (converted from binary or string)</li>
<li><code>list[of one of the above types]</code></li>
</ul>
<p>I have seen some answers using generics, but these are a mix of built-in types and other classes and it isn't obvious to me how generics would apply. Is there a proper type hint or some other way to handle this?</p>
|
<python><python-typing>
|
2025-09-18 15:38:11
| 1
| 2,036
|
yakatz
|
79,768,378
| 3,909,202
|
How can I plot a chemical structure with repeat units using Python?
|
<p>I would like to use Python to generate figures of the repeat units of a range of polymers, using the common notation of square brackets with a subscript "n". Here is an example from <a href="https://de.wikipedia.org/wiki/Polymer#/media/Datei:Polypropylene.svg" rel="nofollow noreferrer">Wikipedia</a>:</p>
<p><a href="https://i.sstatic.net/CTBWMTrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CTBWMTrk.png" alt="A simple structure of the repeat unit in polypropylene" /></a></p>
<p>I have found two widely distributed libraries that allow plotting chemical structures: openbabel with pybel, and rdkit. In the following, I will focus on rdkit, since it seemed more promising, but as long as it works I would not care which library (for now ;).</p>
<p>So, here's what I tried, basic premise:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import Rectangle
from rdkit import Chem
from rdkit.Chem import Draw
def plot_stereochemistries():
"""Generate and save plots of polyisoprene stereochemistries"""
repeat_unit_14_cis = r"C\C=C(C)/C"
mol_cis = Chem.MolFromSmiles(repeat_unit_14_cis)
# Create a subplot
fig, ax = plt.subplots(1, 1, figsize=(10, 10))
# Plot the stereochemistry
if mol_cis:
img = Draw.MolToImage(mol_cis, size=(200, 200))
ax.imshow(img)
ax.set_title("Cis-1,4-polyisoprene")
ax.axis("off")
plt.tight_layout()
plt.savefig("cis-1-4-polyisoprene.png")
plt.show()
if __name__ == "__main__":
plot_stereochemistries()
</code></pre>
<p>At this point, it's clear that rdkit would have no way of knowing that it should draw a repeat unit. I get this figure:</p>
<p><a href="https://i.sstatic.net/2fdZYClM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fdZYClM.png" alt="Figure of cis-1-4-isoprene" /></a></p>
<p>According to the <a href="https://www.rdkit.org/docs/RDKit_Book.html" rel="nofollow noreferrer">RDKit Book</a>, <code>polymer SGroups</code> are supported in the SMILES. However, a modification of the SMILES (<code>r"C\C=C(C)/C"</code> above) to any of the publicly available examples of sgroups in SMILES, such as from the <a href="https://docs.chemaxon.com/display/docs/formats_chemaxon-extended-smiles-and-smarts-cxsmiles-and-cxsmarts.md" rel="nofollow noreferrer">Chemaxon documentation</a>, <code>r"CC(N)C=O |Sg:gen:0::,Sg:mon:1,2,4,0,3::,SgH:1:0|"</code>, does not result in a difference in visualisation with and without sgroup.</p>
<p>The best bet currently is <code>r"*C\C=C(C)/C*"</code>, but while this hints at being a repeat unit, it is not the square brackets with subscript I would like to see.</p>
<p>I would like it to look something like this:</p>
<p><a href="https://i.sstatic.net/Ddm5yZj4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ddm5yZj4.png" alt="Figure of cis-1.4-polyisoprene" /></a></p>
<p>How can I plot a chemical structure with repeat units in square brackets with a subscript using Python?</p>
|
<python><chemistry><rdkit><openbabel>
|
2025-09-18 11:45:54
| 0
| 1,379
|
BernhardWebstudio
|
79,768,198
| 8,188,120
|
PayFast cancel subscription (sandbox mode) in Python
|
<p>I am trying to cancel a PayFast subscription (in the sandbox) using the API tools, in Python.</p>
<p>Looking at the PayFast documentation for <a href="https://developers.payfast.co.za/api#cancel-a-subscription" rel="nofollow noreferrer">recurring billing cancellations</a> it appears to be a PUT request with minimal params.</p>
<p>Here is my minimal working example (with obfuscated credentials), trying to cancel a recurring billing given known <code>merchant_id, passphrase, subscription_id</code> values:</p>
<p>[Edit: it occurred to me that the signature is probably the existing signature for the recurring billing in place, rather than one generated at runtime of this cancellation. Script updated to pass the known signature forward from PayFast's notify_url using <code>signature = data.get("signature")</code>]</p>
<pre><code>import requests
import hashlib
import datetime
import urllib.parse
# ====== HARDCODED VALUES FOR EXAMPLE ==
MERCHANT_ID = "1003XXXX"
PASSPHRASE = "XXXXXXXXXXXX"
SUBSCRIPTION_ID = "4a113e9b-4bda-4007-af87-xxxxxxxxxxxx"
SIGNATURE = "fe8d0a15218b2cab41be372a5cXXXXXX"
# ======================================
def cancel_subscription():
url = f"https://sandbox.payfast.co.za/subscriptions/{SUBSCRIPTION_ID}/cancel"
# Required PayFast params for the header
params = {
"merchant-id": MERCHANT_ID,
"version": "v1",
"timestamp": datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S"),
"signature": SIGNATURE
}
try:
response = requests.put(url, headers=params, timeout=10)
response.raise_for_status()
print("Success:", response.status_code)
print("Response JSON:", response.json())
except requests.exceptions.HTTPError as e:
print("HTTP Error:", e)
print("Response content:", response.text)
except Exception as e:
print("Other Error:", e)
if __name__ == "__main__":
cancel_subscription()
</code></pre>
<p>And I get this output error:</p>
<pre><code>HTTP Error: 419 Client Error: unknown status for url: https://sandbox.payfast.co.za/subscriptions/4a113e9b-4bda-4007-af87-xxxxxxxxxxxx/cancel
</code></pre>
<p>(Obviously without the "xxxxxxxxxxxx")</p>
<p>Unknown status is a bit worrying as it suggests a complete failure at the endpoint. I've tried workarounds such as:</p>
<ol>
<li>Using <code>/api/</code> in the url: <code>https://sandbox.payfast.co.za/api/subscriptions/{SUBSCRIPTION_ID}/cancel</code> - but this does not allow PUT operations</li>
<li>Various ChatGPT and Gemini suggestions which try DELETE requests instead of PUT and sending POST requests with HTTPBasicAuth in the headers.</li>
</ol>
<p>These have yielded responses, which is a step forward. But, I'm a little out of my depth in terms of debugging and moving away from what the documentation says is required, so I would appreciate any wisdom on whether I'm missing obvious fixes!</p>
<p>I have noticed in the past that the PayFast API docs aren't 100% inline with the necessary steps to perform successful transaction API calls, and do require some <em>playing around in the dark</em>.</p>
<hr />
<p>Extra info:</p>
<p>I know the signature generation is okay as it's worked for single payments and setting up recurring billing payments. For clarity, when successfully setting up payment requests, I have generated signatures in the same way as above with these params:</p>
<pre><code>date_today = date.today().strftime("%Y-%m-%d")
params = {
"m_payment_id": transaction_id,
"merchant_id": merchant_id,
"merchant_key": merchant_key,
"return_url": return_url,
"cancel_url": cancel_url,
"notify_url": notify_url,
"email_address": email_address,
"amount": amount,
"item_name": item_name,
"item_description": f'Purchasing of the product entitled: {item_name}, for my mobile app',
"custom_str1": item_category,
"billing_date": date_today,
}
signature = generate_signature(params, passphrase)
</code></pre>
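<p>For completeness, <code>generate_signature</code> is essentially the usual MD5-of-urlencoded-params helper; a simplified sketch (not verbatim, the real helper may differ slightly):</p>
<pre><code>def generate_signature(params: dict, passphrase: str) -> str:
    payload = "&".join(f"{k}={urllib.parse.quote_plus(str(v))}" for k, v in params.items())
    payload += f"&passphrase={urllib.parse.quote_plus(passphrase)}"
    return hashlib.md5(payload.encode()).hexdigest()
</code></pre>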
|
<python><rest><authentication><request><payfast>
|
2025-09-18 08:56:41
| 0
| 925
|
user8188120
|
79,768,155
| 5,852,506
|
Python Flask app connected to Cassandra Astra DB RAM memory issue
|
<p>I'm running a minimal Python Flask app with one API endpoint which makes a simple call to retrieve data from a Cassandra DataStax DB inside a for loop.</p>
<pre><code># Day-2-Day Power
@app.route("/d2d_new_2/power")
def d2d_power():
data = request.args
result = get_data_op_power_d2d(data)
return JsonResponse(response=result, status=HTTP_200_OK)
</code></pre>
<p>The problem is that the memory keeps increasing when I trigger this request and does not decrease after the results are returned from the function.</p>
<pre><code>Line # Mem usage Increment Occurrences Line Contents
=============================================================
104 134.4766 MiB 134.4766 MiB 1 @profile(stream=log_file)
105 def get_data_op_power_d2d(data: dict) -> pd.DataFrame:
106 134.4766 MiB 0.0000 MiB 14 my_list = [string.strip() for string in data.get("books").split(",")]
107 134.4766 MiB 0.0000 MiB 1 if data.get("as_of_op_old") == 'null' or data.get("as_of_op_new") == 'null':
108 abort(abort(HTTP_404_NOT_FOUND,
109 f'No data previous for as_of_op_old = {data.get("as_of_op_old")} '))
110
111 134.4766 MiB 0.0000 MiB 1 as_of_op_old = parser.parse(data.get("as_of_op_old"))
112 134.4766 MiB 0.0000 MiB 1 as_of_op_old = as_of_op_old.strftime('%Y-%m-%d')
113 134.4766 MiB 0.0000 MiB 1 as_of_op_new = parser.parse(data.get("as_of_op_new"))
114 134.4766 MiB 0.0000 MiB 1 as_of_op_new = as_of_op_new.strftime('%Y-%m-%d')
115 134.4766 MiB 0.0000 MiB 1 ts_start = parser.parse(data.get("ts_start"))
116 134.4766 MiB 0.0000 MiB 1 ts_end = parser.parse(data.get("ts_end"))
117
118 143.6016 MiB 0.0000 MiB 12 for book_name in my_list:
119 143.6016 MiB 6.7500 MiB 22 op_df_old = pd.DataFrame(get_op_power_data(
120 143.6016 MiB 0.0000 MiB 11 as_of=as_of_op_old, book_op=book_name, ts_start=ts_start, ts_end=ts_end
121 ))
122
123 143.6016 MiB 2.3750 MiB 22 op_df_new = pd.DataFrame(get_op_power_data(
124 143.6016 MiB 0.0000 MiB 11 as_of=as_of_op_new, book_op=book_name, ts_start=ts_start, ts_end=ts_end
125 ))
126
127 143.6016 MiB 0.0000 MiB 11 del op_df_old
128 143.6016 MiB 0.0000 MiB 11 del op_df_new
129 143.6016 MiB 0.0000 MiB 11 gc.collect()
130
131 143.6016 MiB 0.0000 MiB 1 return pd.DataFrame()
</code></pre>
<p>This is the db call:</p>
<pre><code>def get_op_power_data(as_of, book_op, ts_start, ts_end):
try:
execution = session.execute("SELECT time, value, is_peak FROM series_op_power WHERE as_of=%s AND name = %s AND time >= %s AND time < %s",
(as_of, book_op, ts_start, ts_end))
return list(execution)
except Exception as e:
return []
</code></pre>
<p>I have the same example with a for loop where I store something in some local variables and delete them afterwards and the memory decreases afterwards (at least for one variable we can see it).</p>
<pre><code>Line # Mem usage Increment Occurrences Line Contents
=============================================================
120 109.5000 MiB 109.5000 MiB 1 @profile(stream=log_file)
121 def get_data_op_power_d2d(data: dict):
122 109.5000 MiB 0.0000 MiB 14 my_list = [string.strip() for string in data.get("books").split(",")]
123 109.5000 MiB 0.0000 MiB 1 if data.get("as_of_op_old") == 'null' or data.get("as_of_op_new") == 'null':
124 abort(HTTP_404_NOT_FOUND, f'No data previous for as_of_op_old = {data.get("as_of_op_old")} ')
125
134 109.5000 MiB 0.0000 MiB 1 results_list = []
135
136 125.0117 MiB -0.4062 MiB 12 for book_name in my_list:
148 117.4180 MiB 7.2188 MiB 11 a = [1] * (10 ** 6)
149 269.9414 MiB 1677.5938 MiB 11 b = [2] * (2 * 10 ** 7)
150 117.4180 MiB -1678.0820 MiB 11 del b
151 125.0117 MiB 7.1875 MiB 11 del a
152
153 125.0117 MiB 0.0000 MiB 1 log_file.flush()
154
155 125.0117 MiB 0.0000 MiB 1 del my_list
156 125.0117 MiB 0.0000 MiB 1 gc.collect()
157 125.0117 MiB 0.0000 MiB 1 return results_list
</code></pre>
<p>I have multiple operations inside the initial function, and when I monitor the app I see a memory leak in Grafana, so I tried to reduce it even further by removing the database connection and just reading the data from CSV files, but the RAM usage still keeps going up and doesn't decrease back to its previous state.</p>
<p><a href="https://i.sstatic.net/3KR2e7Ll.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KR2e7Ll.png" alt="enter image description here" /></a></p>
<p>Logs to show the memory increase:</p>
<pre><code>System Memory: 64.0% used (9.16 GB / 15.44 GB)
App Process: 0.7% used (109.11 MB RSS, 836.41 MB VMS)
[2025-09-18 11:10:34] Iteration 29878
----------------------------------------
System Memory: 64.3% used (9.20 GB / 15.44 GB)
App Process: 0.8% used (122.17 MB RSS, 915.45 MB VMS)
[2025-09-18 11:10:39] Iteration 29879
----------------------------------------
System Memory: 64.1% used (9.17 GB / 15.44 GB)
App Process: 0.8% used (122.17 MB RSS, 915.45 MB VMS)
</code></pre>
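<p>(For context, the memory figures above are per-process RSS/VMS; a psutil sketch that produces the same kind of line:)</p>
<pre><code>import psutil

proc = psutil.Process()
mem = proc.memory_info()
print(f"App Process: {proc.memory_percent():.1f}% used ({mem.rss / 2**20:.2f} MB RSS, {mem.vms / 2**20:.2f} MB VMS)")
</code></pre>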
<p>And the logs to show the endpoint call:</p>
<pre><code>127.0.0.1 - - [18/Sep/2025 11:10:32] "GET /d2d_new_2/power?books=DE_STC&as_of_op_old=2025-08-06&as_of_op_new=2025-08-15&ts_start=2025-01-01T00:00:00.000%2B01:00&ts_end=2026-01-01T00:00:00.000%2B01:00&aggr=quarter HTTP/1.1" 200 -
127.0.0.1 - - [18/Sep/2025 11:10:34] "GET /info/memory HTTP/1.1" 200 -
127.0.0.1 - - [18/Sep/2025 11:10:39] "GET /info/memory HTTP/1.1" 200 -
127.0.0.1 - - [18/Sep/2025 11:10:44] "GET /info/memory HTTP/1.1" 200 -
</code></pre>
|
<python><flask><cassandra><memory-leaks><datastax-astra>
|
2025-09-18 08:20:09
| 0
| 886
|
R13mus
|
79,767,913
| 2,409,039
|
Modulenotfounderror When Installing Python Package from Github
|
<p>When installing the fantraxapi Python package using pip normally (<code>pip install fantraxapi</code>), it yields no errors. However, when trying to install it from a git repo, it raises a <code>ModuleNotFoundError</code> for <code>requests</code>.</p>
<pre><code>pip install git+https://github.com/meisnate12/FantraxAPI.git@master
Collecting git+https://github.com/meisnate12/FantraxAPI.git@master (from -r requirements.frozen (line 1))
Cloning https://github.com/meisnate12/FantraxAPI.git (to revision master) to /tmp/pip-req-build-xfx4i9d6
Running command git clone --filter=blob:none --quiet https://github.com/meisnate12/FantraxAPI.git /tmp/pip-req-build-xfx4i9d6
Resolved https://github.com/meisnate12/FantraxAPI.git to commit 1ecc423a2e4c574bdb92679315450b8bcbe36de4
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [24 lines of output]
Traceback (most recent call last):
File "/home/weezy/.virtualenvs/eloSystem/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
main()
File "/home/weezy/.virtualenvs/eloSystem/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 373, in main
json_out["return_val"] = hook(**hook_input["kwargs"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/weezy/.virtualenvs/eloSystem/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 143, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-dcokerv8/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 331, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-dcokerv8/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 301, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-dcokerv8/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 512, in run_setup
super().run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-dcokerv8/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 317, in run_setup
exec(code, locals())
File "<string>", line 1, in <module>
File "/tmp/pip-req-build-xfx4i9d6/fantraxapi/__init__.py", line 3, in <module>
from fantraxapi.fantrax import FantraxAPI
File "/tmp/pip-req-build-xfx4i9d6/fantraxapi/fantrax.py", line 3, in <module>
from requests import Session
ModuleNotFoundError: No module named 'requests'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I'm trying to install one of the forks of this repo. How can I get rid of this error?</p>
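<p>Based on the traceback, the fork's <code>setup.py</code> appears to import the package itself at build time (presumably to read its version), which in turn imports <code>requests</code> before the build environment provides it. A hypothetical reconstruction of what that <code>setup.py</code> might look like (file names taken from the traceback, contents assumed):</p>
<pre><code>from setuptools import setup, find_packages

# Importing the package runs fantraxapi/__init__.py, which executes
# `from fantraxapi.fantrax import FantraxAPI` and therefore `from requests import Session`.
from fantraxapi import __version__

setup(
    name="fantraxapi",
    version=__version__,
    packages=find_packages(),
)
</code></pre>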
|
<python><git><packaging>
|
2025-09-18 01:14:11
| 2
| 1,313
|
riders994
|
79,767,759
| 1,658,617
|
Loading SciPy offline using Pyodide
|
<p>I wish to load SciPy on Pyodide without using a CDN.</p>
<p>For that, I installed the pyodide package using npm, and within a webworker I ran</p>
<pre><code>pyodide.loadPackage('/wheels/scipy-1.14.1-cp313-cp313-pyodide_2025_0_wasm32.whl')
</code></pre>
<p>The pyodide version on the node project was pinned to 0.28.2 and the wheel was extracted from the full 0.28.2 zip.</p>
<p>All other packages (e.g. numpy and others) load successfully, yet scipy seems to try to dynamically load <code>libopenblas.so</code>.</p>
<p>I suspected that it is the same as <code>libopenblas-0.3.26.zip</code> within the full package, and transferred it to the same <code>/wheels</code> path (even though it is not a wheel), but that doesn't seem to work.</p>
<p>How do I load <code>libopenblas</code> using pyodide so scipy can find it?</p>
<p>Full error for reference:</p>
<pre><code>The following error occurred while loading scipy:
Failed to load dynamic library /lib/python3.13/site-packages/scipy/integrate/_dop.cpython-313-wasm32-emscripten.so: 'Could not load dynamic lib: /lib/python3.13/site-packages/scipy/integrate/_dop.cpython-313-wasm32-emscripten.so
Error: Failed to find dynamic library "libopenblas.so"
PythonError: Traceback (most recent call last):
File "/lib/python313.zip/_pyodide/_base.py", line 597, in eval_code_async
await CodeRunner(
...<9 lines>...
.run_async(globals, locals)
File "/lib/python313.zip/_pyodide/_base.py", line 411, in run_async
coroutine = eval(self.code, globals, locals)
File "<exec>", line 22, in <module>
File "/lib/python3.13/site-packages/proj/__init__.py", line 3, in <module>
from scipy.signal import firwin
File "/lib/python3.13/site-packages/scipy/signal/__init__.py", line 307, in <module>
from . import _sigtools, windows
File "/lib/python3.13/site-packages/scipy/signal/windows/__init__.py", line 42, in <module>
from ._windows import *
File "/lib/python3.13/site-packages/scipy/signal/windows/_windows.py", line 7, in <module>
from scipy import linalg, special, fft as sp_fft
File "/lib/python3.13/site-packages/scipy/__init__.py", line 134, in __getattr__
return _importlib.import_module(f'scipy.{name}')
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/lib/python313.zip/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.13/site-packages/scipy/linalg/__init__.py", line 203, in <module>
from ._misc import *
File "/lib/python3.13/site-packages/scipy/linalg/_misc.py", line 3, in <module>
from .blas import get_blas_funcs
File "/lib/python3.13/site-packages/scipy/linalg/blas.py", line 213, in <module>
from scipy.linalg import _fblas
ImportError: Could not load dynamic lib: /lib/python3.13/site-packages/scipy/linalg/_fblas.cpython-313-wasm32-emscripten.so
Error: bad export type for 'srotg_': undefined
</code></pre>
|
<python><scipy><openblas><pyodide>
|
2025-09-17 19:46:31
| 2
| 27,490
|
Bharel
|
79,767,693
| 7,886,407
|
Explain a class that appears to extend itself
|
<p>I'm trying to understand some aspects of the Python TKinter package.</p>
<p>The package root <a href="https://github.com/python/cpython/tree/a9b6b091411a4b54421b2f81edad9778d374e3f8/Lib/tkinter" rel="nofollow noreferrer"><code>cpython/Lib/tkinter</code></a> is found on GitHub. I'm looking at the file <a href="https://github.com/python/cpython/blob/a9b6b091411a4b54421b2f81edad9778d374e3f8/Lib/tkinter/ttk.py" rel="nofollow noreferrer"><code>cpython/Lib/tkinter/ttk.py</code></a>. At line 512, this file defines a <code>Widget</code> class:</p>
<pre class="lang-py prettyprint-override"><code>28 import tkinter
...
512 class Widget(tkinter.Widget):
"""Base class for Tk themed widgets."""
</code></pre>
<p>Figuring out what this is doing requires identifying this imported <code>tkinter</code> package. According to the Python docs, a directory containing a file <code>__init__.py</code> constitutes a <a href="https://docs.python.org/3/reference/import.html#regular-packages" rel="nofollow noreferrer">regular package</a>. Since <code>cpython/Lib/tkinter/__init__.py</code> exists, the directory <code>cpython/Lib/tkinter/</code> is a regular package. My understanding is that when interpreting <code>import tkinter</code>, the current directory is one of the first places Python will look for a package (though that understanding is difficult to verify from the <a href="https://docs.python.org/3/reference/import.html#searching" rel="nofollow noreferrer">"Searching"</a> portion of the docs). If true, then what's imported by the <code>import tkinter</code> line of <code>cpython/Lib/tkinter/ttk.py</code> is the folder <code>cpython/Lib/tkinter/</code> itself.</p>
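<p>One quick diagnostic (not from the original code) is to print which module object the import actually bound:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter

# Shows the file of the module that `import tkinter` resolved to,
# e.g. .../Lib/tkinter/__init__.py for the package's __init__ module.
print(tkinter.__file__)
print(hasattr(tkinter, "Widget"))
</code></pre>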
<p>Since the file <code>cpython/Lib/tkinter/ttk.py</code> is the only place where a <code>Widget</code> class is defined in this directory (at least as far as I can tell from the GitHub search function), then it appears that the code in <code>cpython/Lib/tkinter/ttk.py</code></p>
<pre class="lang-py prettyprint-override"><code>28 import tkinter
...
512 class Widget(tkinter.Widget):
"""Base class for Tk themed widgets."""
</code></pre>
<p>defines a class that extends itself.</p>
<p>Surely there's something I don't understand. What is going on here?</p>
|
<python><tkinter>
|
2025-09-17 18:35:15
| 2
| 531
|
Darien Marks
|
79,767,509
| 16,525,263
|
How to validate the latest subdirectories in HDFS using pyspark
|
<p>I need to check the subdirectory status in HDFS and based on that I need to flag a column.
My desired output is:</p>
<pre><code>Date SourceName SourceType IndName IndValue
2024.01.30 PPL app PPL_Repo OK
2024.01.30 INT app INT_Repo OK
</code></pre>
<p>The IndValue should have "OK" or "KO" values based on the date folders present inside the path: if the path contains today's folder
or yesterday's folder, then it's "OK", else "KO".
I have multiple dictionary elements to check and update the indicator.</p>
<p>This is my code as of now</p>
<pre><code>dict_kpi = \
{
'PPL_repo': \
{
'path' : '/path/to/ppl/',
'SourceName' : 'PPL',
'SourceType' : 'app',
'format' : 'parquet',
'filter' : None,
'datepattern' : 'date=%Y-%m-%d',
'IndValue' : None,
'IndName' : 'PPL_Repo'
},
'INT_Repo': \
{
'path' : '/path/to/int/',
'SourceName' : 'INT',
'SourceType' : 'app',
'format' : 'json',
'filter' : 'context == sign',
'datepattern' : '%Y-%m-%d',
'IndValue' : None,
'IndName' : 'INT_Repo'
}
}
dfs = []
for _date in [datetime(2024, 1, 30)]:
for j in dict_kpi.values():
cols_selected = \
[
F.lit(str(_date)[0:10]).alias('Date'),
F.lit(j['SourceName']).alias('SourceName'),
F.lit(j['SourceType']).alias('SourceType'),
F.lit(j['IndName']).alias('IndName'),
j['IndValue'].alias('IndValue')
]
dfs.append(spark.read.format(j['format']).load(os.path.join(j['path'],
_date.strftime(j['datepattern']))).select(*cols_selected))
dfs[0].show()
</code></pre>
<p>This is not working as expected. I will have many elements in the dictionary, with different paths, file formats, date patterns and different filters.</p>
<p>If the difference between the two dates is more than 1 day, apply a "KO" flag.
Otherwise, check for the presence of the filter: if it exists, then "OK", else "KO".</p>
<p>What changes are required for this?</p>
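<p>For context, a minimal sketch (base paths and date patterns assumed, not tested on my cluster) of how the folder-existence check itself could look, using the Hadoop FileSystem API through Spark's JVM gateway:</p>
<pre><code>from datetime import datetime, timedelta

hadoop_conf = spark._jsc.hadoopConfiguration()
HdfsPath = spark._jvm.org.apache.hadoop.fs.Path
fs = spark._jvm.org.apache.hadoop.fs.FileSystem.get(hadoop_conf)

def ind_value(base_path, datepattern, as_of):
    # "OK" if the partition folder for `as_of` or the previous day exists, else "KO".
    for day in (as_of, as_of - timedelta(days=1)):
        if fs.exists(HdfsPath(base_path + "/" + day.strftime(datepattern))):
            return "OK"
    return "KO"

print(ind_value('/path/to/ppl/', 'date=%Y-%m-%d', datetime(2024, 1, 30)))
</code></pre>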
|
<python><pyspark>
|
2025-09-17 15:10:22
| 0
| 434
|
user175025
|
79,767,497
| 11,598,948
|
`ipyleaflet` shows a grey map after modifying medium-sized data
|
<p>I have an app made with Shiny for Python which displays an <code>ipyleaflet</code> map with a couple thousand markers.</p>
<p>On startup, the map renders fine. However, if I press the "Reload" button to mimic some computations, the entire map becomes gray and unusable. This only happens when the number of observations in <code>sample()</code> is high enough. In the example below, there are 1,000 points on the same lat/lon.</p>
<pre class="lang-py prettyprint-override"><code>from shinywidgets import render_widget, output_widget
from ipyleaflet import (
Map,
CircleMarker,
LayerGroup,
)
from shiny import App, reactive, ui
import geopandas as gpd
geo_str = """{
"type": "FeatureCollection",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
"features": [
{ "type": "Feature", "properties": { "id": "ak16994521", "mag": 2.3, "time": 1507425650893, "felt": null, "tsunami": 0 }, "geometry": { "type": "Point", "coordinates": [ -151.5129, 63.1016, 0.0 ] } },
]}
"""
dat = gpd.read_file(geo_str, driver="GeoJSON")
app_ui = ui.page_fluid(ui.input_action_button("reload", "reload"), output_widget("map"))
def server(input, output, session):
reactive_dat = reactive.Value(dat.sample(n=1000, replace=True))
@reactive.effect
@reactive.event(input.reload)
def _():
reactive_dat.set(dat.sample(n=1000, replace=True))
@render_widget
def map():
map = Map(zoom=1)
markers = []
for idx, row in reactive_dat().iterrows():
marker = CircleMarker(location=(row.geometry.y, row.geometry.x), radius=3)
markers.append(marker)
map.add_layer(LayerGroup(layers=markers))
return map
app = App(app_ui, server)
</code></pre>
<p>I see this error in the browser console:</p>
<pre><code>Uncaught (in promise) TypeError: t is undefined
create_view libembed-amd.js:25
create_child_view libembed-amd.js:69
add_layer_model Map.js:186
update libembed-amd.js:69
render_leaflet Map.js:242
promise callback*render_leaflet Map.js:240
promise callback*render Map.js:236
state_change libembed-amd.js:25
promise callback*t.prototype.create_view/t.state_change< libembed-amd.js:25
promise callback*t.prototype.create_view libembed-amd.js:25
renderValue output.ts:94
onValueChange outputBinding.ts:26
onValueChange outputAdapter.ts:34
receiveOutput shinyapp.ts:515
_init shinyapp.ts:725
_sendMessagesToHandlers shinyapp.ts:695
dispatchMessage shinyapp.ts:670
onmessage shinyapp.ts:256
startActionQueueLoop shinyapp.ts:285
</code></pre>
<p>I'm using <code>uv</code> with <code>pyproject.toml</code>:</p>
<pre><code>[project]
name = "test-shiny-app"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.11"
dependencies = [
"geopandas>=1.1.0",
"ipykernel>=6.29.5",
"ipyleaflet==0.17.3",
"ipywidgets==8.1.7",
"shiny==1.5.0",
"shinywidgets>=0.7.0",
]
</code></pre>
<p>Did anyone encounter this before? Are there some workarounds?</p>
<p>Reported on github but it may take a while to have an answer there so maybe someone here has info on this: <a href="https://github.com/posit-dev/py-shiny/issues/2088" rel="nofollow noreferrer">https://github.com/posit-dev/py-shiny/issues/2088</a></p>
|
<python><py-shiny>
|
2025-09-17 14:57:39
| 0
| 8,865
|
bretauv
|
79,767,425
| 7,321,700
|
Merging Pandas dataframes on column combinations
|
<p>Scenario: I am trying to merge 2 pandas dataframes. DF1 has the bulk data, and DF2 is a sort of mapping. Based on the combination of the values of 3 different columns, I want to put a column from DF2 into DF1.</p>
<p>Data sample (just a snippet of both dfs):
DF1:</p>
<pre><code>+-------+---------------------------------------------+---------------------------------------------+------+----------------+----------------+
| AGENT | 5283 | 5288 | 5318 | 7934 | 7935 |
+-------+---------------------------------------------+---------------------------------------------+------+----------------+----------------+
| 33 | No | No | No | No | No |
| 34 | Yes, formal discussion | General reference | No | No | No |
| 55 | No | No | No | No | No |
| 129 | No | No | No | No | No |
| 307 | No | No | No | No | No |
| 441 | Yes, formal discussion | Formal board consideration | No | No | No |
| 522 | Yes, specific reference but limited details | Formal board commtment with limited details | No | Not Meaningful | Not Meaningful |
| 690 | No | No | No | Not Meaningful | Not Meaningful |
| 749 | Yes, formal discussion | General reference | No | No | No |
| 1011 | No | No | No | No | No |
| 1067 | Yes, formal discussion | Formal board consideration | No | Not Meaningful | Not Meaningful |
| 1272 | No | No | No | Not Meaningful | Not Meaningful |
| 1592 | No | No | No | Not Meaningful | Not Meaningful |
| 1908 | Yes, formal discussion | Formal board commtment with limited details | No | No | No |
| 1949 | No | No | No | Not Meaningful | Not Meaningful |
| 2040 | Yes, formal discussion | Formal board consideration | No | No | No |
| 2102 | Yes, formal discussion | Formal board consideration | No | No | No |
| 2114 | Yes, formal discussion | Formal board consideration | No | Not Meaningful | Not Meaningful |
| 2266 | Yes, formal discussion | Formal board commtment with limited details | No | No | No |
| 2365 | Yes, formal discussion | Formal board consideration | No | No | No |
| 2674 | Yes, formal discussion | Formal board consideration | No | No | No |
| 3109 | No | General reference | No | No | No |
| 3170 | Yes, specific reference but limited details | Formal board commtment with limited details | No | Not Meaningful | Not Meaningful |
| 3295 | Yes, specific reference but limited details | Formal board commtment with limited details | No | No | No |
| 3323 | General reference | General reference | No | Not Meaningful | Not Meaningful |
| 3366 | Yes, specific reference but limited details | Formal board consideration | No | No | No |
| 3840 | General reference | Formal board commtment with limited details | No | No | No |
| 3914 | Yes, specific reference but limited details | Formal board commtment with limited details | No | No | No |
| 3967 | Yes, formal discussion | Formal board consideration | No | Yes | No |
| 4108 | No | No | No | No | No |
+-------+---------------------------------------------+---------------------------------------------+------+----------------+----------------+
</code></pre>
<p>DF2:</p>
<pre><code>+---------------------------------------------+---------------------------------------------+------+-------+
| 5283 | 5288 | 5318 | SCORE |
+---------------------------------------------+---------------------------------------------+------+-------+
| Yes, formal discussion | Formal board consideration | Yes | 10 |
| Yes, formal discussion | Formal board consideration | No | 8 |
| Yes, formal discussion | Formal board commtment with limited details | Yes | 7 |
| Yes, formal discussion | Formal board commtment with limited details | No | 6 |
| Yes, formal discussion | General reference | Yes | 6 |
| Yes, formal discussion | General reference | No | 5 |
| Yes, formal discussion | No specific reference | Yes | 4 |
| Yes, formal discussion | No specific reference | No | 2 |
| Yes, specific reference but limited details | Formal board consideration | Yes | 8 |
| Yes, specific reference but limited details | Formal board consideration | No | 7 |
| Yes, specific reference but limited details | Formal board commtment with limited details | Yes | 6 |
| Yes, specific reference but limited details | Formal board commtment with limited details | No | 5 |
| Yes, specific reference but limited details | General reference | Yes | 5 |
| Yes, specific reference but limited details | General reference | No | 5 |
| Yes, specific reference but limited details | No specific reference | Yes | 3 |
| Yes, specific reference but limited details | No specific reference | No | 2 |
| General reference | Formal board consideration | Yes | 7 |
| General reference | Formal board consideration | No | 6 |
| General reference | Formal board commtment with limited details | Yes | 4 |
| General reference | Formal board commtment with limited details | No | 4 |
| General reference | General reference | Yes | 4 |
| General reference | General reference | No | 3 |
| General reference | No specific reference | Yes | 3 |
| General reference | No specific reference | No | 1 |
| No reference | Formal board consideration | Yes | 6 |
+---------------------------------------------+---------------------------------------------+------+-------+
</code></pre>
<p>Objective: Using columns 5283, 5288 and 5318, I need to add the Score value to DF1 as a new column.</p>
<p>What I tried: I tried adding a new combined column to both dfs and use it as a merge index, but still, the Score column values in DF always end up as nan:</p>
<pre><code>Gov_JT_pivot['merge_key'] = tuple(zip(Gov_JT_pivot['5283'], Gov_JT_pivot['5288'], Gov_JT_pivot['5318']))
Gov_lookup_df['merge_key'] = tuple(zip(Gov_lookup_df['5283'], Gov_lookup_df['5288'], Gov_lookup_df['5318']))
# Perform the merge using the combined key
Gov_JT_pivot= pd.merge(Gov_JT_pivot, Gov_lookup_df[['merge_key', 'SCORE']], on='merge_key', how='left')
</code></pre>
<p>Which results in:</p>
<pre><code>+-------+---------------------------------------------+---------------------------------------------+------+------------------------------------------------------------------------------------------------------+-------+
| AGENT | 5283 | 5288 | 5318 | merge_key | SCORE |
+-------+---------------------------------------------+---------------------------------------------+------+------------------------------------------------------------------------------------------------------+-------+
| 33 | No | No | No | ('No', 'No', 'No') | |
| 34 | Yes, formal discussion | General reference | No | ('Yes, formal discussion', 'General reference', 'No') | |
| 55 | No | No | No | ('No', 'No', 'No') | |
| 129 | No | No | No | ('No', 'No', 'No') | |
| 307 | No | No | No | ('No', 'No', 'No') | |
| 441 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 522 | Yes, specific reference but limited details | Formal board commtment with limited details | No | ('Yes, specific reference but limited details', 'Formal board commtment with limited details', 'No') | |
| 690 | No | No | No | ('No', 'No', 'No') | |
| 749 | Yes, formal discussion | General reference | No | ('Yes, formal discussion', 'General reference', 'No') | |
| 1011 | No | No | No | ('No', 'No', 'No') | |
| 1067 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 1272 | No | No | No | ('No', 'No', 'No') | |
| 1592 | No | No | No | ('No', 'No', 'No') | |
| 1908 | Yes, formal discussion | Formal board commtment with limited details | No | ('Yes, formal discussion', 'Formal board commtment with limited details', 'No') | |
| 1949 | No | No | No | ('No', 'No', 'No') | |
| 2040 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 2102 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 2114 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 2266 | Yes, formal discussion | Formal board commtment with limited details | No | ('Yes, formal discussion', 'Formal board commtment with limited details', 'No') | |
| 2365 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 2674 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 3109 | No | General reference | No | ('No', 'General reference', 'No') | |
| 3170 | Yes, specific reference but limited details | Formal board commtment with limited details | No | ('Yes, specific reference but limited details', 'Formal board commtment with limited details', 'No') | |
| 3295 | Yes, specific reference but limited details | Formal board commtment with limited details | No | ('Yes, specific reference but limited details', 'Formal board commtment with limited details', 'No') | |
| 3323 | General reference | General reference | No | ('General reference', 'General reference', 'No') | |
| 3366 | Yes, specific reference but limited details | Formal board consideration | No | ('Yes, specific reference but limited details', 'Formal board consideration', 'No') | |
| 3840 | General reference | Formal board commtment with limited details | No | ('General reference', 'Formal board commtment with limited details', 'No') | |
| 3914 | Yes, specific reference but limited details | Formal board commtment with limited details | No | ('Yes, specific reference but limited details', 'Formal board commtment with limited details', 'No') | |
| 3967 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 4108 | No | No | No | ('No', 'No', 'No') | |
| 4525 | Yes, specific reference but limited details | Formal board commtment with limited details | No | ('Yes, specific reference but limited details', 'Formal board commtment with limited details', 'No') | |
| 4528 | General reference | No | No | ('General reference', 'No', 'No') | |
| 4608 | Yes, formal discussion | Formal board consideration | No | ('Yes, formal discussion', 'Formal board consideration', 'No') | |
| 4641 | No | No | No | ('No', 'No', 'No') | |
| 4650 | No | Formal board consideration | No | ('No', 'Formal board consideration', 'No') | |
+-------+---------------------------------------------+---------------------------------------------+------+------------------------------------------------------------------------------------------------------+-------+
</code></pre>
<p>Question: What am I doing incorrectly and how to fix it?</p>
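<p>For reference, a minimal sketch of the direct multi-column merge I have not tried yet (assuming the three column names are plain strings in both frames):</p>
<pre><code>import pandas as pd

# Merge directly on the three shared columns instead of building a tuple key by hand.
Gov_JT_pivot = pd.merge(
    Gov_JT_pivot,
    Gov_lookup_df[['5283', '5288', '5318', 'SCORE']],
    on=['5283', '5288', '5318'],
    how='left',
)
</code></pre>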
|
<python><pandas><dataframe>
|
2025-09-17 13:56:09
| 2
| 1,711
|
DGMS89
|
79,767,306
| 8,411,980
|
DuckDB query that works with time intervals produces incorrect values
|
<p>Running through python - no tables needed. See below query and result:</p>
<pre class="lang-py prettyprint-override"><code>import duckdb
sampling_period_sec = 13
date_range = ('2023-01-01', '2023-01-02')
db_conn = duckdb.connect()
db_conn.query(
f"""
DROP TABLE IF EXISTS date_range_query;
CREATE TEMPORARY TABLE date_range_query AS
SELECT
DATE_ADD(
CAST('1970-01-01 00:00:00' AS TIMESTAMP),
TO_SECONDS(CAST(
CEIL(
EXTRACT(EPOCH FROM RANGE)
/ {sampling_period_sec}) AS INT
) * {sampling_period_sec}
)
) AS ts_event
FROM RANGE(
TIMESTAMP '{date_range[0]}',
TIMESTAMP '{date_range[1]}',
INTERVAL {sampling_period_sec} SECOND);
SELECT
t1.ts_event
,t2.ts_event as ts_window
,EXP((MILLISECOND(t2.ts_event - t1.ts_event) / 10**3) / {sampling_period_sec} / {9}) as inter_decay
FROM date_range_query t1
JOIN date_range_query t2
ON t2.ts_event BETWEEN t1.ts_event - INTERVAL
{int(10 * sampling_period_sec)} SECOND AND t1.ts_event
AND t1.ts_event in ('2023-01-01 00:01:03', '2023-01-01 00:01:16')
ORDER BY 1, 2
"""
).df()
</code></pre>
<p>Result:</p>
<pre>
ts_event ts_window inter_decay
0 2023-01-01 00:01:03 2023-01-01 00:00:11 0.641180
1 2023-01-01 00:01:03 2023-01-01 00:00:24 0.716531
2 2023-01-01 00:01:03 2023-01-01 00:00:37 0.800737
3 2023-01-01 00:01:03 2023-01-01 00:00:50 0.894839
4 2023-01-01 00:01:03 2023-01-01 00:01:03 1.000000
<b>5 2023-01-01 00:01:16 2023-01-01 00:00:11 0.958165</b>
6 2023-01-01 00:01:16 2023-01-01 00:00:24 0.641180
7 2023-01-01 00:01:16 2023-01-01 00:00:37 0.716531
8 2023-01-01 00:01:16 2023-01-01 00:00:50 0.800737
9 2023-01-01 00:01:16 2023-01-01 00:01:03 0.894839
10 2023-01-01 00:01:16 2023-01-01 00:01:16 1.000000
</pre>
<p>The <b>bold-face</b> row, 5, is the problem. The <code>inter_decay</code> should be lower than in row 6, but it is not. Should be <code>0.624949</code> but it is <code>0.958165</code>.</p>
<p>I was able to show that:</p>
<pre><code>SELECT
MICROSECOND(TIMESTAMP '2023-01-01 00:00:24' - TIMESTAMP '2023-01-01 00:01:06'),
MICROSECOND(TIMESTAMP '2023-01-01 00:00:11' - TIMESTAMP '2023-01-01 00:01:06')
</code></pre>
<p>Produces: -52000000, -5000000</p>
<p>This explains the problem. But then after playing a little with the calculations (using MICROSECONDS/MILLISECONDS and POSITIVE/NEGATIVE intervals) the values for the interval came out correct. But the main query above still produces a wrong decay. This looks like some problem with DuckDB's engine, because I can't figure out what I am doing incorrectly, if anything.</p>
<p><b> ADDITION:</b></p>
<p>Adding another column:</p>
<pre><code>MILLISECOND(t2.ts_event - t1.ts_event) / 10**3 as delta_time
</code></pre>
<p>Results - shows my claim:</p>
<pre>
ts_event ts_window delta_time inter_decay
0 2023-01-01 00:01:03 2023-01-01 00:00:11 -52.0 0.641180
1 2023-01-01 00:01:03 2023-01-01 00:00:24 -39.0 0.716531
2 2023-01-01 00:01:03 2023-01-01 00:00:37 -26.0 0.800737
3 2023-01-01 00:01:03 2023-01-01 00:00:50 -13.0 0.894839
4 2023-01-01 00:01:03 2023-01-01 00:01:03 0.0 1.000000
5 2023-01-01 00:01:16 2023-01-01 00:00:11 -5.0 0.958165
6 2023-01-01 00:01:16 2023-01-01 00:00:24 -52.0 0.641180
7 2023-01-01 00:01:16 2023-01-01 00:00:37 -39.0 0.716531
8 2023-01-01 00:01:16 2023-01-01 00:00:50 -26.0 0.800737
9 2023-01-01 00:01:16 2023-01-01 00:01:03 -13.0 0.894839
10 2023-01-01 00:01:16 2023-01-01 00:01:16 0.0 1.000000
</pre>
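<p>For reference, a variant I am considering (not yet verified against the full query): computing the gap as whole seconds with <code>DATE_DIFF('second', t1.ts_event, t2.ts_event)</code> instead of extracting a single component of the interval with <code>MILLISECOND()</code>. A minimal check of the function on the problematic pair of timestamps:</p>
<pre class="lang-py prettyprint-override"><code>import duckdb

# DATE_DIFF counts whole seconds between the two timestamps; expected to print [(-65,)].
print(duckdb.connect().query(
    "SELECT DATE_DIFF('second', TIMESTAMP '2023-01-01 00:01:16', TIMESTAMP '2023-01-01 00:00:11') AS delta_sec"
).fetchall())
</code></pre>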
|
<python><sql><duckdb>
|
2025-09-17 12:16:51
| 1
| 573
|
AOK
|
79,766,854
| 1,719,931
|
How to create a cross table with percentages in Polars?
|
<p>I would like to create a cross table that shows, in each cell, the percentages of rows over the total number of rows.</p>
<p>Inspired by <a href="https://stackoverflow.com/a/77097672/1719931">this post</a> I started with:</p>
<pre><code>df = pl.DataFrame({"a": [2, 0, 1, 0, 0, 0], "b": [1, 1, 1, 0, 0, 1]})
crosstab = (
df.pivot(on="b", index="a", values="b", aggregate_function="len", sort_columns=True)
.fill_null(0)
.sort("a")
)
crosstab
</code></pre>
<p>and then inspired by <a href="https://docs.pola.rs/user-guide/expressions/expression-expansion/#programmatically-generating-expressions" rel="nofollow noreferrer">polars' user guide</a> I tried to convert values into percentages with:</p>
<pre><code>def perc_cols(df):
tot = df.select(~cs.by_index(0)).to_numpy().sum()
for col in df.columns[1:]:
yield ((pl.col(col) / tot) * 100)
crosstab.select(cs.by_index(0), perc_cols(crosstab))
</code></pre>
<p>but I get an error:</p>
<pre><code>TypeError: cannot create expression literal for value of type generator.
</code></pre>
<p>notice that both <code>crosstab.select(cs.by_index(0))</code> and <code>crosstab.select(perc_cols(crosstab))</code> works as expected.</p>
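<p>For reference, a small variation I am experimenting with: materialising or unpacking the generator before handing it to <code>select</code>, so that it does not receive a bare generator object:</p>
<pre><code>crosstab.select(cs.by_index(0), *perc_cols(crosstab))
# or
crosstab.select(cs.by_index(0), list(perc_cols(crosstab)))
</code></pre>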
|
<python><dataframe><pivot><python-polars>
|
2025-09-17 03:36:21
| 2
| 5,202
|
robertspierre
|
79,766,750
| 629,186
|
How can I get the accurate time spent in the pytest-html report when the tests are run in parallel (via pytest-xdist)?
|
<p>I've noticed an error in the report generated by <code>pytest-html</code> and I'm surprised I can't find it mentioned anywhere.</p>
<p>I am running a series of tests (70 total) and using <code>pytest-html</code> to generate a report on the results. I am also using <code>pytest-xdist</code> so that the tests run in parallel.</p>
<p>When the test run is complete, I look at the terminal to see how long everything took:</p>
<pre class="lang-none prettyprint-override"><code>2 failed, 68 passed in 146.09s (0:02:26)
</code></pre>
<p>These are UI tests so two and a half minutes isn't that bad; also it's why these need to be run in parallel.</p>
<p>But when I look at the <code>pytest-html</code> report, it says:</p>
<pre class="lang-none prettyprint-override"><code>70 tests took 00:23:32.
</code></pre>
<p>The report doesn't look at the actual start and finish time. It's using the times of each of the tests and summing them together.</p>
<p>How can I update the report to show the real time spent, not the total of the individual tests?</p>
|
<python><pytest><pytest-html><pytest-xdist>
|
2025-09-16 22:51:23
| 1
| 1,817
|
MivaScott
|
79,766,638
| 21,826,195
|
AttributeError: 'AsyncSniffer' object has no attribute 'stop_cb'
|
<p>When running the following code:</p>
<pre class="lang-python prettyprint-override"><code>import time
from scapy.all import AsyncSniffer
sniffy = AsyncSniffer(iface="wlp0s20f3", filter="not arpand not port 22")
sniffy.start()
time.sleep(1)
sniffy.stop()
</code></pre>
<p>I get:</p>
<pre><code> def stop(self, join=True):
"""Stops AsyncSniffer if not in async mode"""
if self.running:
try:
> self.stop_cb()
E AttributeError: 'AsyncSniffer' object has no attribute 'stop_cb'
venv/lib/python3.8/site-packages/scapy/sendrecv.py:1017: AttributeError
</code></pre>
<p>I found this similar question: <a href="https://stackoverflow.com/questions/70559382/scapy-asyncsniffer-fails-if-stop-is-called-right-after-start">Scapy AsyncSniffer fails if stop() is called right after start()</a>.
However, in my case the issue was not fixed although I added the <code>sleep(1)</code>.</p>
<p><sub>I won't bore you by adding more debugging. By writing a minimal reproducible example, it became obvious: the filter is broken</sub></p>
|
<python><scapy>
|
2025-09-16 20:07:24
| 1
| 2,028
|
Mo_
|
79,766,617
| 2,482,575
|
python request getting different result than postman
|
<p>I am attempting to improve my Python by working on a side-project, just interacting with websites to look at available golf tee times. I have zero intention/plans to do anything commercial; this is just a semi-practical way to learn more about the requests library and website/API interaction.</p>
<p>If I use the same URL, headers, and JSON payload, I get back two different results. In both cases I get status 200, but using postman, the result body is json data of available tee times for a given date and golf course.</p>
<p>If I post the same request using python, I get back html/website data and not the JSON. I've attempted as much debugging as I can, like verifying the headers and body are set the same for both requests. Also no authorization is being used in either postman or python. What else can I check or how can I test this further?</p>
<p>I'm including the code of the API/website request, as well as screenshots from Postman.</p>
<pre><code>for course, id in golfback_courses.items():
gc_url = "https://api.golfback.com/api/v1/courses/" + id + "/date/" + golf_date + "/teetimes?utm_source=drumm_farm_website&utm_medium=button&utm_campaign=tee_times"
headers = {"Content-Type": "application/json; charset=utf-8"}
payload = json.dumps({'sessionId': None})
print(f"Tee Times for {course} on {golf_date}: ")
response = requests.post(gc_url, headers=headers, json=payload, allow_redirects=True)
#debugging prints and checking status codes
print(response.status_code)
print(dump.dump_all(response).decode("utf-8"))
html_content = response.content
print(type(html_content))
soup = BeautifulSoup(html_content, "html.parser")
#additional json/text manipulation and print logic once I verify I have json object
</code></pre>
<p>I'm also attaching a screenshot of the request and body response from Postman. It shows the headers; with the exception of the Postman-Token, I verified they are all the same on the Python side, or disabled them in Postman and verified I still get the same response. The body is simply raw JSON passing in a null sessionId, i.e. {"sessionId": null}. I verified via json.dumps that {'sessionId': None} in Python encodes to sessionId: null.</p>
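<p>One difference I have not ruled out yet (so treat this as an assumption about the cause): my payload is already a string produced by <code>json.dumps</code>, and passing it through the <code>json=</code> parameter serialises it a second time, so the body that goes out is a JSON string literal rather than a JSON object. The two variants look like this:</p>
<pre><code># sends the body "{\"sessionId\": null}" (a double-encoded JSON string)
requests.post(gc_url, headers=headers, json=json.dumps({"sessionId": None}))

# sends the body {"sessionId": null} (a JSON object, like Postman)
requests.post(gc_url, headers=headers, json={"sessionId": None})
</code></pre>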
<p><a href="https://i.sstatic.net/DThL614E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DThL614E.png" alt="enter image description here" /></a></p>
<p>Any further troubleshooting help is appreciated.</p>
|
<python><python-requests>
|
2025-09-16 19:35:16
| 0
| 439
|
tman
|
79,766,610
| 3,357,935
|
How do I use multiple filters in the Google Analytics 4 API from Python?
|
<p>I am using the <a href="https://developers.google.com/analytics/devguides/reporting/data/v1/quickstart" rel="nofollow noreferrer">Google Analytics 4 Data API</a> with the <a href="https://google.analytics.data_v1beta" rel="nofollow noreferrer">Python client library</a> to run reports. I'm able to run a report with a single <code>FilterExpression</code> to filter traffic where the <code>country</code> equals "United States". Now I want to add a second filter to return rows where <code>country</code> equals "United States" and <code>region</code> equals "New York".</p>
<p>However, I've only been able to get the code running with a single filter. Attempting to pass multiple filters has resulted in errors.</p>
<p><strong>Working code with one filter:</strong></p>
<pre class="lang-py prettyprint-override"><code>from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
DateRange,
Dimension,
Filter,
FilterExpression,
Metric,
RunReportRequest,
)
def sample_run_report(property_id):
client = BetaAnalyticsDataClient()
request = RunReportRequest(
property=f"properties/{property_id}",
dimensions=[
Dimension(name="country"),
Dimension(name="region")
],
metrics=[Metric(name="activeUsers")],
date_ranges=[DateRange(start_date="2020-03-31", end_date="today")],
dimension_filter=FilterExpression(
filter=Filter(
field_name="country",
string_filter=Filter.StringFilter(
value="United States",
match_type=Filter.StringFilter.MatchType.EXACT
)
)
), # TODO: Add second filter for New York region
)
response = client.run_report(request)
print("Report result:")
for row in response.rows:
dimension_values = [dimension.value for dimension in row.dimension_values]
metric_values = [metric.value for metric in row.metric_values]
print(" | ".join(dimension_values + metric_values))
if __name__ == '__main__':
# property_id is user-specific
sample_run_report(property_id="236813742")
</code></pre>
<p><strong>What I've tried:</strong></p>
<p>Providing a list of <code>Filter</code>s for <code>filter=</code></p>
<pre><code>dimension_filter=FilterExpression(
filter=[
Filter(
field_name="country",
string_filter=Filter.StringFilter(
value="United States",
match_type=Filter.StringFilter.MatchType.EXACT
)
),
Filter(
field_name="region",
string_filter=Filter.StringFilter(
value="New York",
match_type=Filter.StringFilter.MatchType.EXACT
)
)
]
)
# TypeError: Message must be initialized with a dict: google.analytics.data.v1beta.FilterExpression
</code></pre>
<p>Giving a list of <code>FilterExpression</code>s for <code>dimension_filter=</code></p>
<pre><code>dimension_filter=[
FilterExpression(
filter=Filter(
field_name="country",
string_filter=Filter.StringFilter(
value="United States",
match_type=Filter.StringFilter.MatchType.EXACT
)
)
),
FilterExpression(
filter=Filter(
field_name="region",
string_filter=Filter.StringFilter(
value="New York",
match_type=Filter.StringFilter.MatchType.EXACT
)
)
)
]
# TypeError: Message must be initialized with a dict: google.analytics.data.v1beta.FilterExpression
</code></pre>
<p>Specifying an <a href="https://developers.google.com/analytics/devguides/reporting/data/v1/rest/v1beta/FilterExpression" rel="nofollow noreferrer"><code>andGroup</code></a> with a <a href="https://developers.google.com/analytics/devguides/reporting/data/v1/rest/v1beta/FilterExpression#FilterExpressionList" rel="nofollow noreferrer"><code>FilterExpressionList</code></a> with filters specified in <code>expressions=</code>.</p>
<pre><code>dimension_filter=FilterExpression(
and_group=FilterExpressionList(
expressions=[
Filter(
field_name="country",
string_filter=Filter.StringFilter(
value="United States",
match_type=Filter.StringFilter.MatchType.EXACT
)
),
Filter(
field_name="region",
string_filter=Filter.StringFilter(
value="New York",
match_type=Filter.StringFilter.MatchType.EXACT
)
)
]
)
)
# TypeError: Parameter to MergeFrom() must be instance of same class: expected <class 'FilterExpression'> got <class 'google.analytics.data_v1beta.types.data.Filter'>.
</code></pre>
<p><strong>Question:</strong></p>
<p>How do I specify more than one dimension filter when using the Google Analytics 4 API from the official Python client library?</p>
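<p>For completeness, the <code>and_group</code> variant I intend to try next wraps each <code>Filter</code> in its own <code>FilterExpression</code>, since the last error message suggests <code>FilterExpressionList.expressions</code> expects <code>FilterExpression</code> objects rather than bare <code>Filter</code>s (with <code>FilterExpressionList</code> imported from <code>google.analytics.data_v1beta.types</code>):</p>
<pre><code>dimension_filter=FilterExpression(
    and_group=FilterExpressionList(
        expressions=[
            FilterExpression(
                filter=Filter(
                    field_name="country",
                    string_filter=Filter.StringFilter(
                        value="United States",
                        match_type=Filter.StringFilter.MatchType.EXACT
                    )
                )
            ),
            FilterExpression(
                filter=Filter(
                    field_name="region",
                    string_filter=Filter.StringFilter(
                        value="New York",
                        match_type=Filter.StringFilter.MatchType.EXACT
                    )
                )
            )
        ]
    )
)
</code></pre>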
|
<python><google-analytics><google-analytics-4>
|
2025-09-16 19:29:24
| 1
| 27,724
|
Stevoisiak
|
79,766,577
| 1,719,931
|
Drop column by index in polars
|
<p>I need to drop the first column in a polars DataFrame.</p>
<p>I tried:</p>
<pre class="lang-py prettyprint-override"><code>result = df.select([col for idx, col in enumerate(df.columns) if idx != 0])
</code></pre>
<p>But it looks long and clumsy for such a simple task?</p>
<p>I also tried:</p>
<pre class="lang-py prettyprint-override"><code>df.select(pl.exclude(pl.nth(0)))
</code></pre>
<p>but that errored out</p>
<pre class="lang-none prettyprint-override"><code># TypeError: invalid input for `exclude`
# Expected one or more `str` or `DataType`; found <Expr ['cs.nth(1, require_all=true)'] at 0x21A964B6350> instead.
</code></pre>
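<p>For reference, the most direct variants I am weighing instead (both look up the name by position first):</p>
<pre class="lang-py prettyprint-override"><code>result = df.drop(df.columns[0])                 # drop by resolved column name
result = df.select(pl.exclude(df.columns[0]))   # same idea via exclude with a string
</code></pre>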
|
<python><dataframe><python-polars>
|
2025-09-16 18:43:03
| 3
| 5,202
|
robertspierre
|
79,766,564
| 13,564
|
How do I pass through the click on a GtkEditableLabel to the GtkColumnView row containing it?
|
<p>I have a <code>Gtk.ColumnView</code> where cells are represented by <code>Gtk.EditableLabel</code>s. Keyboard navigation works fine, clicking in the cells to edit them works fine, tab navigation
works fine.</p>
<p>However, clicking within a row to select that row doesn't work.</p>
<p>Some cells are editable (i.e. <code>label.set_editable(True)</code>) and some are not. The ones which are editable simply go into edit mode when clicked within the bounds of the label, the ones which are not don't react to clicks at all. In order to select a row with the mouse, I have to click in the "dead space" within the row, either between rows/cells or in a part of the cell that is not covered by a short label.</p>
<p>What I expect / want to happen, is for clicking on a non-editable cell to immediately select the row, and for clicking on an editable cell to select its row, and then begin editing.</p>
<p>I can sort of emulate this for non-editable rows by doing <code>label.set_sensitive(False)</code>, but this styles everything with a light-grey "disabled" color, and that's not what I want to indicate; the row is interactable, it's just that that particular cell / column is not editable.</p>
<p>It seems in gtk4, there's no longer any <code>"clicked"</code> signal, so handling that is not possible; I gather I want to do something with a <code>Gtk.Gesture</code> event controller?</p>
<hr />
<p>Update: while I haven't had time to fully isolate this example, there is open source code available here:</p>
<ul>
<li><a href="https://github.com/glyph/Pomodouroboros/blob/1a879f33a4f63cd300c748f322d035cddf234284/src/pomodouroboros/linux/linuxlegacypom.ui" rel="nofollow noreferrer">https://github.com/glyph/Pomodouroboros/blob/1a879f33a4f63cd300c748f322d035cddf234284/src/pomodouroboros/linux/linuxlegacypom.ui</a></li>
<li><a href="https://github.com/glyph/Pomodouroboros/blob/1a879f33a4f63cd300c748f322d035cddf234284/src/pomodouroboros/linux/old_gtk_gui.py" rel="nofollow noreferrer">https://github.com/glyph/Pomodouroboros/blob/1a879f33a4f63cd300c748f322d035cddf234284/src/pomodouroboros/linux/old_gtk_gui.py</a></li>
<li><a href="https://github.com/glyph/Pomodouroboros/blob/1a879f33a4f63cd300c748f322d035cddf234284/src/pomodouroboros/linux/gobj_utils.py" rel="nofollow noreferrer">https://github.com/glyph/Pomodouroboros/blob/1a879f33a4f63cd300c748f322d035cddf234284/src/pomodouroboros/linux/gobj_utils.py</a></li>
</ul>
|
<python><user-interface><gtk><gtk4>
|
2025-09-16 18:23:08
| 1
| 32,066
|
Glyph
|
79,766,387
| 155,861
|
ModuleNotFoundError for other imports in Streamlit Cloud when streamlit_app.py is under nested folders
|
<p>We have a Streamlit-based project deployed on Streamlit Cloud. It was working fine when the main file <em>streamlit_app.py</em> was under <em>src</em>; after moving the main file into the nested folder <em>src/alpha/streamlit_app.py</em>, it now shows a ModuleNotFoundError for an imported module.</p>
<p>Python 3.12, using Poetry to run locally.</p>
<pre class="lang-none prettyprint-override"><code>
├── __init__.py
├── api/
│   ├── __init__.py
│   ├── fast_api.py
│   └── routes_helper.py
├── assets/
│   ├── __init__.py
│   └── images/
│       ├── __init__.py
│       └── company_logo.png
├── db/
├── features/
│   ├── __init__.py
│   ├── audio_recorder/
│   │   ├── __init__.py
│   │   └── audio_recorder_ui.py
│   └── image_description/
│       ├── __init__.py
│       └── description.py
├── alpha/
│   ├── __init__.py
│   ├── __main__.py
│   └── streamlit_app.py
└── utils/
    ├── __init__.py
    ├── _version.py
    └── aws_helper.py
</code></pre>
<p>here is streamlit_app.py</p>
<pre class="lang-py prettyprint-override"><code>
import sys
from pathlib import Path
import streamlit as st
from features.audio_recorder import audio_recorder_ui
</code></pre>
<p>And it throws this error on Streamlit Cloud, as below:</p>
<pre class="lang-none prettyprint-override"><code>
/home/adminuser/venv/lib/python3.12/site-packages/streamlit/runtime/scriptru
nner/exec_code.py:128 in exec_func_with_error_handling
/home/adminuser/venv/lib/python3.12/site-packages/streamlit/runtime/scriptru
nner/script_runner.py:669 in code_to_exec
/mount/src/tz-script/src/alpha/streamlit_app.py:5 in <module>
2 from pathlib import Path
3
4 import streamlit as st
❱ 5 from features.audio_recorder import audio_recorder_ui
────────────────────────────────────────────────────────────────────────────
ModuleNotFoundError: No module named 'features'
</code></pre>
<p>Everything works fine locally; what could be the issue?</p>
<p>I even tried moving the files under <code>alpha</code> or using the relative path <code>src.alpha</code>, but I get the same error.</p>
<p>Any solution/suggestion?</p>
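<p>For reference, the workaround I am currently considering (the number of <code>parents</code> levels here is my assumption) is to prepend the <code>src</code> directory to <code>sys.path</code> at the top of <code>streamlit_app.py</code>, before the package imports:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from pathlib import Path

# streamlit_app.py lives in src/alpha/, so parents[1] should be src/
sys.path.insert(0, str(Path(__file__).resolve().parents[1]))

from features.audio_recorder import audio_recorder_ui  # noqa: E402
</code></pre>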
|
<python><streamlit><python-poetry>
|
2025-09-16 15:10:32
| 1
| 54,183
|
xkeshav
|
79,766,282
| 1,801,359
|
How do I import a package when running a script from different directories?
|
<p>I have a directory structure that can be simplified like this:</p>
<pre><code>test/
scripts/
a.py
src/
b.py
</code></pre>
<p>In a.py, I have the line:</p>
<p><code>import src.b</code></p>
<p>If I'm in the test directory, I want this to work:</p>
<p><code>python3 scripts/a.py</code></p>
<p>If I'm in the scripts directory, I want this to work:</p>
<p><code>python3 a.py</code></p>
<p>How can I accomplish this?</p>
<p>If I move a.py to the top level, it works.
I don't want to call the script using python -m scripts.a.</p>
<p>Do I need to import sys and add '..' (a sketch of that idea follows below)?
Do I need an <code>__init__.py</code> in the src directory?
Do I need relative imports using <code>import ..src</code>?
Does the current directory vs the script directory matter?</p>
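<p>A minimal sketch of the <code>sys.path</code> idea mentioned above, for concreteness (whether it is the right approach is exactly what I am asking):</p>
<pre><code>import sys
from pathlib import Path

# Make the `test` directory importable regardless of where a.py is run from,
# so that `import src.b` can resolve.
sys.path.insert(0, str(Path(__file__).resolve().parent.parent))

import src.b
</code></pre>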
|
<python><python-3.x><import><module><package>
|
2025-09-16 13:25:25
| 2
| 422
|
user1801359
|
79,766,211
| 1,612,369
|
VENV does not include pip in newly created environment
|
<p>I found a number of questions related to my problem, but the solutions did not really help me. I work on Windows 10 with PowerShell 7 and currently have a few Python versions installed, such as 3.11, 3.12 and 3.13. I use the launcher <code>py.exe</code> to create virtual environments for the different versions, since many apps have not been updated to Python 3.13.</p>
<h3>pip is not installed in new environments</h3>
<p>I simply run the following command</p>
<pre><code>py -m venv .venv
</code></pre>
<p>which creates a new environment in the current folder. The following activates <code>.venv</code></p>
<pre><code>.venv\Scripts\Activate.ps1
</code></pre>
<p>A changed prompt to <code>(.venv) PS D ...> </code> demonstrates it's working. I can also confirm that local <code>python</code> version is correctly recognised</p>
<pre><code>Get-Command python
</code></pre>
<p>The path is correct: <code><local path>\.venv\Scripts\python.exe</code>.</p>
<p>Here's the problem. After activating new env, <code>pip</code> is still available as a standalone program but does not work on a local level. If I try to run</p>
<pre><code>pip install numpy --dry-run
</code></pre>
<p>I get the report</p>
<pre><code>Requirement already satisfied: numpy in <user>\appdata\local\programs\python\python313\lib\site-packages (2.3.3)
</code></pre>
<p>it clearly indicates <code>pip</code> is not working correctly. I get evidence of it if I run the command</p>
<pre><code>Get-Command pip
</code></pre>
<p>because the path to <code>pip</code> is</p>
<pre><code><User>\AppData\Local\Programs\Python\Python313\Scripts\pip.exe
</code></pre>
<p>which is incorrect.</p>
<hr />
<h3>Impossible to install new modules</h3>
<p>This leads to the problem of not being able to install new modules in newly created environments because <code>pip</code> is missing. For instance</p>
<pre><code>python -m pip install pip --upgrade
</code></pre>
<p>issues the error: <code><local path>\.venv\Scripts\python.exe: No module named pip</code>, while</p>
<pre><code>Get-Command python
</code></pre>
<p>still demonstrates I am working on a local version of python in virtual environment</p>
<pre><code><local path>\.venv\Scripts\python.exe
</code></pre>
<hr />
<h3>ensurepip does not solve the problem</h3>
<p>I tried to search for solutions and found out pip can be installed if I run the following command</p>
<pre><code>python -m ensurepip
</code></pre>
<p>or</p>
<pre><code>python -m ensurepip --upgrade
</code></pre>
<p>For some reason this results in the following output:</p>
<pre><code>Requirement already satisfied: pip in <local path>\.venv\lib\site-packages (25.2)
</code></pre>
<p>which would indicate <code>pip</code> was already installed. I ran <code>Get-ChildItem -Recurse</code> on <code>.venv/</code> to see all subdirectories and programs, but <code>pip</code> was nowhere to be found:</p>
<pre><code> Directory: <Local Path>\.venv
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 16/09/2025 12:20 Include
d---- 16/09/2025 12:20 Lib
d---- 16/09/2025 12:20 Scripts
-a--- 16/09/2025 12:20 71 .gitignore
-a--- 16/09/2025 12:20 338 pyvenv.cfg
Directory: <Local Path>\.venv\Lib
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 16/09/2025 12:20 site-packages
Directory: <Local Path>\.venv\Scripts
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 16/09/2025 12:20 2208 activate
-a--- 16/09/2025 12:20 1022 activate.bat
-a--- 16/09/2025 12:20 2281 activate.fish
-a--- 06/08/2025 16:16 9031 Activate.ps1
-a--- 06/08/2025 16:14 393 deactivate.bat
-a--- 06/08/2025 16:15 255320 python.exe
-a--- 06/08/2025 16:15 251736 pythonw.exe
Directory: <Local Path>\__pycache__
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 29/07/2024 16:12 15096 backup.cpython-312.pyc
</code></pre>
<hr />
<h3>EDIT</h3>
<p>PS: Please note that <code>Get-ChildItem -Recurse</code> omits empty folders in the listing above, such as <code>Include</code> and <code>Lib\site-packages</code>. I identified them by running this command:</p>
<pre><code>Get-ChildItem ".venv/" -Recurse -Directory
| Where-Object {!$_.GetFileSystemInfos().Count}
</code></pre>
<p>which provides the output</p>
<pre><code> Directory: <local path>\.venv
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 16/09/2025 14:35 Include
    Directory: <local path>\.venv\Lib
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 16/09/2025 14:35 site-packages
</code></pre>
|
<python><pip><venv>
|
2025-09-16 12:08:31
| 0
| 2,691
|
Celdor
|
79,766,097
| 2,473,382
|
Celery: How can I kill a task at a specific time?
|
<p>I want to be sure that a task is killed at a certain time if it is still running. The context is an overloaded worker, where tasks are not picked up straight away.</p>
<p>Imagine this "busy" worker (concurrency=1 to simulate busyness):</p>
<pre class="lang-bash prettyprint-override"><code>uv run celery --app src.tasks.example worker --concurrency 1 --queues celery --loglevel info --hostname default
</code></pre>
<p>I am going to run this task, which just sleeps for the number of seconds given as a parameter:</p>
<pre class="lang-py prettyprint-override"><code>@app.task()
def naptime(seconds: float) -> None:
sleep(seconds)
</code></pre>
<p>I submit this task that way:</p>
<pre class="lang-py prettyprint-override"><code>naptime.apply_async(args=(3,))
task2 = naptime.apply_async(args=(3,))
start = time()
result = task2.get()
total = time() - start
</code></pre>
<p>I expect <code>total</code> to be around 6 (2 tasks of 3 seconds, which are executed sequentially) and it is indeed close enough.</p>
<p>Now, I want to make sure that <code>task2</code> is killed 5 seconds after submission time if it runs.</p>
<ul>
<li><p><code>task2 = naptime.apply_async(args=(3,), time_limit=5)</code> => does not work as I want, because it would kill the task 5 seconds after <strong>it is picked up</strong>. The task is done by then, but it finishes later than 5 seconds after submission.</p>
</li>
<li><p><code>task2 = naptime.apply_async(args=(3,), time_limit=5, expires=5)</code> => does not work as I want, the task does not have time to expire.</p>
</li>
</ul>
<p>Note: in reality, I do not know how long the task will last, so I cannot compute an expires value (a task of S seconds that needs to be done T seconds after submission could have expires = T - S; I know T but not S).</p>
<p>How can I make sure that the task is killed at a specific time?</p>
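<p>For context, the closest workaround I can think of (my own assumption, not something from the docs) is to schedule a revoke with <code>terminate=True</code> at the absolute deadline from the submitting side:</p>
<pre class="lang-py prettyprint-override"><code>from threading import Timer

task2 = naptime.apply_async(args=(3,))
# Hard-kill the task 5 seconds after submission if it is still pending or running.
Timer(5.0, lambda: task2.revoke(terminate=True)).start()
</code></pre>
<p>It feels clunky, so I am hoping Celery has a built-in way to express "kill at this wall-clock time".</p>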
|
<python><celery>
|
2025-09-16 10:12:25
| 1
| 3,081
|
Guillaume
|
79,766,024
| 2,123,706
|
group_by with polars concatenating values
|
<p>I have a polars dataframe that I want to group by, concatenating the unique values of each group into a single entry.</p>
<p>in pandas, I go:</p>
<pre><code>def unique_colun_values(x):
return('|'.join(set(x)))
dd=pd.DataFrame({'col1':[1,1,2,3,4],'col2':['a','a','a','b','b'],'col3':['qwe','rty','asd','fgh','zxc']})
dd.groupby('col1').agg(lambda x: unique_colun_values(x))
</code></pre>
<p>This works fine</p>
<p>when I try to implement in polars:</p>
<pre><code>pl.from_pandas(dd).group_by('col1').agg(lambda x: unique_colun_values(x), allow_object=True)
</code></pre>
<p>I get the following error:</p>
<pre><code>TypeError: cannot create expression literal for value of type function.
Hint: Pass `allow_object=True` to accept any value and create a literal of type Object.
</code></pre>
<p>Am I missing something?</p>
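<p>For reference, the closest native expression I have found so far; the exact method name seems to vary between Polars versions (<code>str.concat</code> in older releases, <code>str.join</code> in newer ones), so treat this as an assumption:</p>
<pre><code>pl.from_pandas(dd).group_by('col1').agg(
    pl.all().unique().str.join('|')
)
</code></pre>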
|
<python><dataframe><group-by><python-polars>
|
2025-09-16 09:16:53
| 1
| 3,810
|
frank
|
79,765,921
| 4,948,719
|
Python Protocol write-only attributes
|
<p>The <a href="https://typing.python.org/en/latest/spec/protocol.html" rel="nofollow noreferrer">Python documentation on Protocols</a> (<code>typing.Protocol</code>) mentions the following:</p>
<blockquote>
<p>By default, protocol variables as defined above are considered readable and writable. To define a read-only protocol variable, one can use an (abstract) property</p>
</blockquote>
<p>Making a protocol variable read-only is useful because it becomes covariant. So for example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Protocol
class A(): pass
class B(A): pass
# protocols
class HasA(Protocol):
a: A # read-write
class HasA_RO(Protocol):
@property
def a(self) -> A: ... # read-only
#implementation
class Impl():
a: B
</code></pre>
<p>Then <code>Impl</code> implements <code>HasA_RO</code>, but not <code>HasA</code>. This is because</p>
<pre class="lang-py prettyprint-override"><code>impl1: HasA = Impl() # type error
impl2: HasA_RO = Impl() # ok
a = A()
impl1.a = a # valid for HasA, but not for Impl1
impl2.a = a # is, correctly, not allowed !
</code></pre>
<p>Now, I would like to know if there is a way to make a protocol with <strong>write-only</strong> variables.</p>
<p>This would be useful because the variable would then be contravariant. For example</p>
<pre class="lang-py prettyprint-override"><code>class HasB_WO(Protocol):
b: WriteOnly[B] ...
class Impl2():
b: A
i2: HasB_WO = Impl2() # valid because you can always write a value of type B to a field of type A
</code></pre>
|
<python><python-typing>
|
2025-09-16 07:53:20
| 0
| 1,834
|
tbrugere
|
79,765,883
| 11,863,823
|
Static typing for an `Enum` that contains compiled regex patterns
|
<p>I am in a scenario (that could be compared to writing an AST) where I compile multiple regex patterns that I want to reuse later, and for the sake of simplicity and readability I want to store them in an <code>Enum</code>. It works properly:</p>
<pre class="lang-py prettyprint-override"><code>import re
import enum
class Patterns(enum.Enum):
"""
Patterns used to read type declarations in `xyz` files.
"""
Simple: re.Pattern = re.compile(r"([\w_]+)\s*[\|;]")
Complex: re.Pattern = re.compile(r"([\w_]+)\s*:\s*([\w_]+)")
</code></pre>
<p>When I want to use them, I'll for instance write:</p>
<pre class="lang-py prettyprint-override"><code>text = "{ a : alpha; b : bravo; c : charlie}"
pattern = Patterns.Complex
print(dict(pattern.findall(text))) # doesn't work
</code></pre>
<p>However, <code>pattern</code> being of type <code><Enum: Patterns></code>, <code>pattern</code> does not have a <code>findall</code> method. For <code>int</code> or <code>str</code> (or many other types), this is fixed by subclassing both <code>Enum</code> and <code>str</code> for instance:</p>
<pre class="lang-py prettyprint-override"><code>class color(str, enum.Enum):
R = "red"
G = "green"
B = "blue"
print(color.B.upper())
# BLUE
</code></pre>
<p>In Python, regex patterns obtained via <code>re.compile(pattern)</code> are of type <code>re.Pattern</code>, that is marked as <code>typing.Final</code> and therefore should not be subclassed (you still can of course, but static type checkers will heavily complain about it). A workaround is to extract the <code>value</code> of the <code>Enum</code> member, but that's not the most convenient to use:</p>
<pre class="lang-py prettyprint-override"><code>print(dict(pattern.value.findall(text))) # works
</code></pre>
<p>I'm trying to find a better, cleaner way to do this.</p>
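<p>One alternative I am weighing (a sketch, not necessarily the idiomatic solution): skip <code>Enum</code> and use a plain class namespace with <code>Final</code> attributes, which keeps the grouping and readability but lets each attribute keep its real <code>re.Pattern</code> type:</p>
<pre class="lang-py prettyprint-override"><code>import re
from typing import Final

class Patterns:
    """Patterns used to read type declarations in `xyz` files."""
    Simple: Final[re.Pattern[str]] = re.compile(r"([\w_]+)\s*[\|;]")
    Complex: Final[re.Pattern[str]] = re.compile(r"([\w_]+)\s*:\s*([\w_]+)")

text = "{ a : alpha; b : bravo; c : charlie}"
print(dict(Patterns.Complex.findall(text)))  # type checkers see a real re.Pattern
</code></pre>
<p>It loses <code>Enum</code> features such as iteration and membership checks, which is why I am still looking for something cleaner.</p>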
|
<python><enums><python-typing>
|
2025-09-16 07:18:59
| 0
| 628
|
globglogabgalab
|
79,765,711
| 5,046,485
|
How can I type a dict to match a generic `Callable`'s `ParamSpec.kwargs`?
|
<p>I'm trying to write a small utility that forwards a dict of keyword arguments to an arbitrary callable, and I'd like static type checking to verify that the dict's keys/values match the callable's keyword parameters.</p>
<pre class="lang-py prettyprint-override"><code>def apply[R](fn: Callable[..., R], inputs: dict[str, Any]) -> R:
return fn(**inputs)
</code></pre>
<p>This works at runtime, but type checkers don't relate the <code>dict</code> type to the <code>Callable</code> so incorrect keys/values won't be flagged.</p>
<p>What I <em>would like</em> is something like the following:</p>
<pre class="lang-py prettyprint-override"><code>def apply[**P, R](fn: Callable[P, R], inputs: P.kwargs) -> R:
return fn(**inputs)
</code></pre>
<p>The standard <code>*args/**kwargs</code> forwarding works, but it changes the call site:</p>
<pre class="lang-py prettyprint-override"><code>def apply[**P, R](fn: Callable[P, R], /, *args: P.args, **kwargs: P.kwargs) -> R:
return fn(*args, **kwargs)
</code></pre>
<p>This type-checks perfectly. However, it forces callers to use <code>apply(fn, **inputs)</code> instead of <code>apply(fn, inputs)</code>. I'm specifically asking whether <code>inputs</code> can be a <em>single</em> dict parameter that's type-checked against <code>P.kwargs</code>. <em>Also, type checkers report that I need to include both <code>args</code> and <code>kwargs</code> attributes of <code>ParamSpec</code>, even though I only want to support <code>kwargs</code>.</em></p>
<p>Ideally, it would type check like this:</p>
<pre class="lang-py prettyprint-override"><code>def f(*, a: int, b: str) -> tuple[int, str]:
return (a, b)
good = apply(f, {"a": 1, "b": "x"}) # should type-check
bad = apply(f, {"a": "oops"}) # should be a type error
extra = apply(f, {"a": 1, "b": "x", "c": 0}) # should be a type error
</code></pre>
<p>Is there any way to make <code>inputs</code>'s type track <code>fn</code>'s <code>P.kwargs</code> without changing the call site to <code>apply(fn, **inputs)</code>?</p>
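<p>The closest I have gotten (a sketch that fixes the signature up front rather than tracking an arbitrary callable's <code>ParamSpec</code>) is a <code>TypedDict</code>, which keeps the single-dict call site but only for one known signature:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict

class FParams(TypedDict):
    a: int
    b: str

def f(*, a: int, b: str) -> tuple[int, str]:
    return (a, b)

def apply_f(inputs: FParams) -> tuple[int, str]:
    return f(**inputs)

good = apply_f({"a": 1, "b": "x"})       # type-checks
# apply_f({"a": "oops"})                 # flagged: wrong value type
# apply_f({"a": 1, "b": "x", "c": 0})    # flagged: unexpected key
</code></pre>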
|
<python><python-typing>
|
2025-09-16 01:45:00
| 0
| 619
|
grahamcracker1234
|
79,765,591
| 1,028,270
|
Is it possible to set the type of a field based on the value of another field?
|
<p>I want to set and instantiate the right type based on the value of another field and get typing and autocomplete on it.</p>
<p>I want this to happen automatically when the class is instantiated:</p>
<pre class="lang-py prettyprint-override"><code>TMyBackendConfig = TypeVar("TMyBackendConfig")
class MyBackendOneConfig(BaseSettings):
field_one: str = "1111111"
class MyBackendTwoConfig(BaseSettings):
field_two: str = "222222222"
class Config(BaseSettings, Generic[TMyBackendConfig]):
# backend_type: Literal["one", "two"] # want to use this to set right one
backend_config: TMyBackendConfig
def test():
_one = MyBackendOneConfig()
_two = MyBackendTwoConfig()
one = Config(backend_config=_one)
two = Config(backend_config=_two)
# I get autocomplete and typing
one.backend_config.field_one
two.backend_config.field_two
</code></pre>
<p>I want to be able to just do <code>Config(backend_type="one")</code> and get back the object with the right nested config.</p>
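<p>The closest sketch I have so far (an assumption on my side, not code I am happy with) uses a discriminated union keyed on <code>backend_type</code>, so type checkers can narrow the nested config after an <code>isinstance</code> check, though it still is not the <code>Config(backend_type="one")</code> shorthand I want:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, Union
from pydantic import BaseModel, Field

class MyBackendOneConfig(BaseModel):
    backend_type: Literal["one"] = "one"
    field_one: str = "1111111"

class MyBackendTwoConfig(BaseModel):
    backend_type: Literal["two"] = "two"
    field_two: str = "222222222"

class Config(BaseModel):
    backend_config: Union[MyBackendOneConfig, MyBackendTwoConfig] = Field(
        discriminator="backend_type"
    )

cfg = Config(backend_config={"backend_type": "one"})
if isinstance(cfg.backend_config, MyBackendOneConfig):
    print(cfg.backend_config.field_one)  # narrowed here, autocomplete works
</code></pre>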
|
<python><python-typing><pydantic>
|
2025-09-15 20:34:16
| 1
| 32,280
|
red888
|
79,765,521
| 14,826,251
|
use existing env with new exe from pyinstaller
|
<p>I would like to reduce the volume of my pyinstaller generated exe that I have to deploy every time.</p>
<p>I create my app using pyinstaller.
When I use the onefile flag I get a ~750MB exe file. Without the onefile flag, the exe is ~50MB and the _internal folder ~1.6GB (650MB when zipped).
I cannot remove dependencies to reduce the volume.</p>
<p>Is it possible to direct the exe to take the _internal folder from a specified location (for example: C:\Program Files\MyAppDeps v3_internal) such that I can deploy the _internal folder only once in a while (~once per quarter, when there is a new 3rd party dependency) and then distribute only the small exe file every time?</p>
<p>Users cannot copy the _internal/exe each time to the same directory.
Users cannot create symbolic links.</p>
<p>I want to do this because I update my app very frequently (several times per week) and deploy to machines that are not connected to any network, which makes the distribution process long and difficult; reducing the distribution volume would improve productivity greatly.</p>
|
<python><pyinstaller>
|
2025-09-15 19:07:18
| 4
| 1,125
|
SiP
|
79,765,484
| 4,470,365
|
Why does Pandera print failing rows with pa.check() and a lambda function but not on a column check?
|
<p>New to using Pandera. I want it to print the record(s) that fail the check. This is the simple check I want, fail when the system capacity is over 500:</p>
<pre class="lang-py prettyprint-override"><code>import pandera.pandas as pa
import pandas as pd
schema = pa.DataFrameSchema(
{
"TotalSystemCapacityRating": pa.Column(float, pa.Check.lt(500), nullable=False),
}
)
# Run the validation
print(schema.validate(completed_view, lazy=True))
</code></pre>
<p>It fails correctly, but only tells me the bad values, not details about the records associated with them:</p>
<blockquote>
<p>pandera.errors.SchemaErrors: {
"DATA": {
"DATAFRAME_CHECK": [
{
"schema": null,
"column": "TotalSystemCapacityRating",
"check": "less_than(500)",
"error": "Column 'TotalSystemCapacityRating' failed element-wise validator number 0: less_than(500) failure cases: 6440.0, 6000.0, 9240.0, 22550.0"
}
]
}
}</p>
</blockquote>
<p>Then I write a more-verbose version of the same check and it shows me the bad records:</p>
<pre class="lang-py prettyprint-override"><code>schema = pa.DataFrameSchema(
checks=[
pa.Check(
lambda df: ~(
(df["TotalSystemCapacityRating"] >= 500)
),
error="Must have TotalSystemCapacityRating < 500.",
),
],
)
print(schema.validate(completed_view, lazy=True))
</code></pre>
<p>It prints the entire dataframe for the four records that fail:</p>
<blockquote>
<p>pandera.errors.SchemaErrors: {
"DATA": {
"DATAFRAME_CHECK": [
{
"schema": null,
"column": null,
"check": "Must have TotalSystemCapacityRating < 500.",
"error": "DataFrameSchema 'None' failed element-wise validator number 0: <Check : Must have TotalSystemCapacityRating < 500.> failure cases: 75e4e529-005d-40d6-a094-2329927ba7e2, 3ed561a9-c4e2-46b7-9973-4e25317f038c, ... </p>
</blockquote>
<p>That printing is the behavior I want. But the code in the first block is a much simpler way of saying the same thing. Is there a way I can get the printing behavior of the second example with the concise code of the first?</p>
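<p>For context, the workaround I am currently using (a sketch based on the <code>failure_cases</code> frame that <code>SchemaErrors</code> exposes) is to catch the error from the concise schema and look the failing rows up myself, but I was hoping Pandera could do this directly:</p>
<pre class="lang-py prettyprint-override"><code>import pandera.pandas as pa
from pandera.errors import SchemaErrors

schema = pa.DataFrameSchema(
    {"TotalSystemCapacityRating": pa.Column(float, pa.Check.lt(500), nullable=False)}
)

try:
    # completed_view is the dataframe from above
    schema.validate(completed_view, lazy=True)
except SchemaErrors as err:
    # failure_cases carries an "index" column pointing back at the bad rows
    bad_rows = completed_view.loc[err.failure_cases["index"]]
    print(bad_rows)
</code></pre>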
|
<python><pandas><pandera>
|
2025-09-15 18:24:03
| 1
| 23,346
|
Sam Firke
|
79,765,445
| 21,405,520
|
Get rows with unique value in a specific column in pandas
|
<p>Following is my data frame.</p>
<pre class="lang-none prettyprint-override"><code>id name class
--------------------------
0 Nick a
1 Jane b
2 Jacon a
3 Jack b
4 Cooze a
--------------------------
5 Nick b
6 Jane b
7 Jacon c
8 Jack a
9 Cooze a
--------------------------
10 John a
</code></pre>
<p>I need to get all names that have a unique value in the class column, and also add a new column that shows the repetition count of each name-class pair, i.e.:</p>
<pre class="lang-none prettyprint-override"><code>id name class count
-----------------------------------
0 Jane b 2
1 Cooze a 2
2 John a 1
</code></pre>
<p>Update: 'John' is added because it is only mentioned once in the name column, therefore, it has a unique value in the class column.</p>
<p>What I have done:</p>
<pre><code>data = {
'name': ['Nick', 'Jane', 'Jacon', 'Jack', 'Cooze', 'Nick', 'Jane', 'Jacon', 'Jack', 'Cooze', 'John'],
'class': ['a', 'b', 'a', 'b', 'a', 'b', 'b', 'c', 'a', 'a', 'a']}
df = pd.DataFrame(data)
new_data = df.groupby('name')['class'].unique()
print(new_data)
</code></pre>
<p>What I get is:</p>
<pre class="lang-none prettyprint-override"><code>name
Cooze [a]
Jack [b, a]
Jacon [a, c]
Jane [b]
John [a]
Nick [a, b]
Name: class, dtype: object
</code></pre>
<p>In the next step I have to get the length of the objects in the class column and return the ones whose length equals 1. But first of all, I get different errors when I try to get the length of the objects in the class column, and second, I have the feeling it must be easier than this! I am a beginner in pandas; does anyone know how I can get this?</p>
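<p>For reference, the furthest I got on my own (a sketch I am not sure is idiomatic) uses two group-wise transforms so everything stays aligned with the original rows:</p>
<pre><code>import pandas as pd

data = {
    'name': ['Nick', 'Jane', 'Jacon', 'Jack', 'Cooze', 'Nick', 'Jane', 'Jacon', 'Jack', 'Cooze', 'John'],
    'class': ['a', 'b', 'a', 'b', 'a', 'b', 'b', 'c', 'a', 'a', 'a']}
df = pd.DataFrame(data)

g = df.groupby('name')['class']
result = (
    df.assign(count=g.transform('size'))   # how often each name occurs
      .loc[g.transform('nunique') == 1]    # keep names with a single class value
      .drop_duplicates(['name', 'class'])
      .reset_index(drop=True)
)
print(result)
</code></pre>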
|
<python><pandas>
|
2025-09-15 17:34:55
| 4
| 621
|
user6781
|
79,765,416
| 3,749,646
|
Matplotlib show time ticks every two hours with minute offset
|
<p>I want to make a graph with Matplotlib. I want the X-axis ticks to be every 2 hours and aligned with my data.</p>
<p>Using set_major_locator(mdates.MinuteLocator()) accepts interval=120 to make ticks every two hours, but these ticks do not align with the data. MinuteLocator() also accepts a byminute range, which satisfies the offset requirement, but then ticks are placed every hour.</p>
<p>Here's an example:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime, timedelta
start_time = datetime(2025, 9,15, 12, 37)
t = [start_time, start_time + timedelta(hours=2), start_time + timedelta(hours=4)]
values = [4,2,7]
plt.plot(t,values)
ax = plt.gca()
ax.xaxis.set_major_locator(mdates.MinuteLocator(interval=120))
plt.grid(True)
plt.show()
</code></pre>
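<p>The best I have come up with so far (a sketch, not necessarily the intended way) is to build the tick positions myself every 2 hours from the first data point and pin them with a <code>FixedLocator</code>, which keeps the 37-minute offset:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import matplotlib.ticker as mticker
from datetime import datetime, timedelta

start_time = datetime(2025, 9, 15, 12, 37)
t = [start_time, start_time + timedelta(hours=2), start_time + timedelta(hours=4)]
values = [4, 2, 7]

plt.plot(t, values)
ax = plt.gca()

# explicit tick positions every 2 hours, anchored at the first data point
ticks = mdates.date2num([start_time + timedelta(hours=2 * i) for i in range(3)])
ax.xaxis.set_major_locator(mticker.FixedLocator(ticks))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))
plt.grid(True)
plt.show()
</code></pre>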
|
<python><matplotlib>
|
2025-09-15 16:48:20
| 1
| 430
|
peteey
|
79,765,297
| 1,900,384
|
Does mypy not consider "bool(object) == True"?
|
<p>If Python classes do not define <code>__bool__</code> or <code>__len__</code>, <code>bool(<class object>)</code> <a href="https://docs.python.org/3/library/stdtypes.html" rel="nofollow noreferrer">defaults to <code>True</code></a>.</p>
<p>However, mypy (tested with: <code>v1.15.0</code>) doesn't seem to consider this.</p>
<pre><code>class A:
def __init__(self) -> None:
self._value = 42
def foobar(self) -> int:
return self._value
def foobar(a: A | None) -> int | None:
    # case: `a is None`     → returns `None`
    # case: `a is not None` → returns `42`
# however: mypy complains...
# `Incompatible return value type (got A | int | None), expected "int | None"`
return a and a.foobar()
</code></pre>
<p>To my understanding, the expression <code>a and a.foobar()</code> can only evaluate to type <code>A</code> if <code>a is not None and bool(a) == False</code>, which cannot happen. My conclusion is that mypy doesn't consider that <code>a is not None</code> implies <code>bool(a) == True</code>.</p>
<p>Am I right that mypy simply fails on this? Is there any convenient workaround to write <code>optional_obj and optional_obj.attribute</code> in a mypy-compatible way?</p>
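<p>The workaround I am using in the meantime (a simple sketch) is an explicit <code>None</code> check, which mypy narrows correctly, but I would still like to keep the shorter <code>and</code> form if possible:</p>
<pre><code>def foobar_workaround(a: A | None) -> int | None:
    # explicit narrowing: no `A` leaks into the inferred return type
    return a.foobar() if a is not None else None
</code></pre>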
|
<python><python-typing><mypy>
|
2025-09-15 14:50:08
| 2
| 2,201
|
matheburg
|
79,765,276
| 6,662,425
|
How can I efficiently get both a column and a scalar using Polars expressions?
|
<p>Polars suggests the usage of Expressions to avoid eager execution and then execute all expressions together at the very end.
I am unsure how this is possible if I want a column and a scalar. For example let's say I start with a single column <code>'test'</code> and want to calculate its mean and produce a centered column. It is trivial to express this using expressions:</p>
<pre class="lang-py prettyprint-override"><code>>>> import polars as pl
>>> import numpy as np
>>> df = pl.DataFrame({"test": np.array([0.,1,2,3,4])})
>>> df
shape: (5, 1)
┌──────┐
│ test │
│ ---  │
│ f64  │
╞══════╡
│ 0.0  │
│ 1.0  │
│ 2.0  │
│ 3.0  │
│ 4.0  │
└──────┘
>>> mean = pl.col('test').mean().alias('mean')
>>> df.select(mean)
shape: (1, 1)
┌──────┐
│ mean │
│ ---  │
│ f64  │
╞══════╡
│ 2.0  │
└──────┘
>>> centered = pl.col('test') - mean
>>> df.select(centered)
shape: (5, 1)
┌──────┐
│ test │
│ ---  │
│ f64  │
╞══════╡
│ -2.0 │
│ -1.0 │
│ 0.0  │
│ 1.0  │
│ 2.0  │
└──────┘
</code></pre>
<p>Of course you could select them both, but then the mean gets broadcasted over all rows which does not seem storage efficient. Is there a good way to obtain both the column and the scalar?</p>
<p>In this case the best thing to do may be to calculate the mean eagerly and then proceed with the centering. But of course this might not work as well for more general cases.</p>
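<p>One variant I have tried (a sketch; I am unsure whether it is the recommended pattern) is to keep everything in a single lazy query, rely on Polars evaluating the repeated <code>mean()</code> subexpression only once, and pull the scalar out of the broadcast column afterwards:</p>
<pre class="lang-py prettyprint-override"><code># continuing from the df above
out = (
    df.lazy()
    .select(
        (pl.col("test") - pl.col("test").mean()).alias("centered"),
        pl.col("test").mean().alias("mean"),
    )
    .collect()
)
centered = out.get_column("centered")
mean = out.get_column("mean")[0]  # broadcast column; the first value is the scalar
</code></pre>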
|
<python><dataframe><python-polars>
|
2025-09-15 14:32:36
| 3
| 1,373
|
Felix Benning
|
79,765,206
| 5,349,916
|
Type hint empty dict
|
<p>I need to type-hint that a value is either some <code>TypedDict</code> or a completely empty <code>dict</code>. The <code>TypedDict</code> itself already exists and is non-trivial, and it's an all-or-nothing situation, so modifying the <code>TypedDict</code> to have optional keys is not sufficient.<br />
Both of these are used in a context where it's important whether I have one or the other (because they are handled differently) and in a context where <em>any</em> dict with the key-value types of the <code>TypedDict</code> is acceptable (e.g. serialising to JSON).</p>
<p><strong>How can I type-hint the "completely empty <code>dict</code>" part?</strong></p>
<p>The expectation is that this a) signals that no <em>specific</em> key can be read or written, but b) can still be read anywhere an arbitrary-size <code>Mapping</code> of specific key-value types is expected.</p>
<p>Since I already use <code>TypedDict</code>, defining an empty <code>TypedDict</code> seems natural. However, neither MyPy nor PyRight meaningfully accept this: the resulting type cannot actually be used even as a trivial <code>Mapping</code>.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict, Mapping
class EmptyDict(TypedDict): pass
def foo(bar: Mapping[str, float]) -> None:...
foo(EmptyDict())
</code></pre>
<p><a href="https://mypy-play.net/?mypy=latest&python=3.12&flags=strict&gist=667d91cb027bfe596dc05ac77d303399" rel="nofollow noreferrer">MyPy rejects this with:</a></p>
<blockquote>
<pre><code>error: Argument 1 to "foo" has incompatible type "EmptyDict"; expected "Mapping[str, float]" [arg-type]
</code></pre>
</blockquote>
<p><a href="https://pyright-play.net/?code=GYJw9gtgBALgngBwJYDsDmUkQWEMoAqiApgCYAiSAxjADRQCyAhgsugLABQXVANkwGcBUAKLZ4lGgAoiCMpJgBKAFxQEggV1LFgUYGDBSARkxCrmrVGgDaAmCHrBeYJjAC6KgHTeuXfYbEECWoYKUVFLiA" rel="nofollow noreferrer">PyRight rejects this with:</a></p>
<blockquote>
<pre><code>Argument of type "EmptyDict" cannot be assigned to parameter "bar" of type "Mapping[str, float]" in function "foo"
"EmptyDict" is not assignable to "Mapping[str, float]"
Type parameter "_VT_co@Mapping" is covariant, but "object" is not a subtype of "float"
"object" is not assignable to "float" (reportArgumentType)
</code></pre>
</blockquote>
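<p>The closest spelling I have found so far (a sketch; it needs <code>Never</code>, i.e. Python 3.11+ or <code>typing_extensions</code>) is <code>dict[str, Never]</code>: since <code>Never</code> has no values, no key can ever be present, and because <code>Mapping</code> is covariant in its value type it is still accepted wherever a <code>Mapping[str, float]</code> is expected. I am not sure it is the intended idiom, though:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Mapping, Never

def foo(bar: Mapping[str, float]) -> None: ...

empty: dict[str, Never] = {}
foo(empty)  # accepted
# empty["x"] = 1.0  # flagged: no value can have type Never
</code></pre>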
|
<python><python-typing>
|
2025-09-15 13:30:55
| 3
| 53,360
|
MisterMiyagi
|
79,765,153
| 5,118,421
|
python kafka doesn't see headers/metadata while kafka ui and kafka go lang does
|
<p>kafka-python doesn't see Kafka headers, while the Go Kafka client does.
It doesn't see any headers in any of the messages.</p>
<p><strong>Example of code:</strong></p>
<pre><code>for message in consumer:
# message value and key are raw bytes -- decode if necessary!
# e.g., for unicode: `message.value.decode('utf-8')`
print(
"%s:%d:%d: key=%s headers=%s"
% (
message.topic,
message.partition,
message.offset,
message.key,
"".join(message.headers),
)
)
</code></pre>
<p><strong>Expected:</strong>
<a href="https://i.sstatic.net/JfzyBf62.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfzyBf62.png" alt="kafka ui shows kafka headers for the message" /></a>
<a href="https://i.sstatic.net/65j6roxB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65j6roxB.png" alt="kafka-go see headers" /></a></p>
<p><strong>Actual:</strong>
<a href="https://i.sstatic.net/2fVNn6EM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fVNn6EM.png" alt="kafk-python doesn't see any headers" /></a></p>
<p><strong>Raw:</strong></p>
<pre><code>ConsumerRecord(topic='mz-core-document-manager-responses', partition=2, leader_epoch=-1, offset=***, timestamp=1757941918771, timestamp_type=0, key=b'***', value=b'***', headers=[], checksum=***, serialized_key_size=93, serialized_value_size=1127, serialized_header_size=-1)
</code></pre>
<p><strong>Versions:</strong></p>
<p>python: <code>3.13/3.10</code>
kafka-python: <code>@ git+https://github.com/dpkp/kafka-python.git@512d0a0b8d71cf7f34f1b23f8a42d52c28af3266 /^2.0.2</code></p>
|
<python><apache-kafka><kafka-python>
|
2025-09-15 12:46:33
| 2
| 1,407
|
Irina
|
79,764,974
| 10,982,755
|
Do I have to copy every workspace member in the Dockerfile to be able to use `uv sync --locked`
|
<p>I'm currently in the process of migrating an existing codebase in my org to use uv as the package manager. It's a monorepo, and having the ability to set up workspaces within a single repo was amazing. I'm still running into an issue when building a Docker image with the uv.lock file.</p>
<p>Below is my folder structure.</p>
<pre><code>org-backend
├── src
│   ├── base
│   │   ├── ..
│   │   └── pyproject.toml
│   └── bo
│       ├── ..
│       └── pyproject.toml
├── application
│   ├── ..
│   └── pyproject.toml
└── pyproject.toml
</code></pre>
<p>Here's the root <code>pyproject.toml</code></p>
<pre><code>[project]
name = "org-backend"
version = "0.1.0"
description = ".."
readme = "README.md"
requires-python = "==3.12.*"
[tool.uv.workspace]
members = ["src/base", "src/bo", "application"]
</code></pre>
<p>The root <code>pyproject.toml</code> contains <code>base</code>,<code>bo</code> and <code>application</code> as a workspace member. While building the Docker image for application service, I only copy the following <code>/src/base</code> and <code>/application</code>.</p>
<pre><code># Dockerfile for application
RUN pip install uv==0.8.17
COPY ./src/base /org-backend/src/base
COPY application /org-backend/application
COPY pyproject.toml /org-backend/
COPY uv.lock /org-backend/
WORKDIR /org-backend
RUN uv sync --locked --active --no-group dev --package application
</code></pre>
<p>Here's the error message when building with above docker file, <code>The lockfile at 'uv.lock' needs to be updated, but '--locked' was provided. To update the lockfile, run 'uv lock'.</code></p>
<p>My question is do I have to copy every workspace member? If not, what would be the best approach to build an image?</p>
<p>If yes, this is going to be a hassle since I have 4-5 packages and over 15 services. Everything will be defined as a workspace member, and the 15 services will not even be related to each other. Having to copy every workspace member is going to wreak havoc, and the image will also contain unnecessary code!</p>
|
<python><docker><uv>
|
2025-09-15 09:43:14
| 1
| 617
|
Vaibhav
|
79,764,955
| 11,159,734
|
Pytest in FastAPI + Postgres results in: <sys>:0: RuntimeWarning: coroutine 'Connection._cancel' was never awaited
|
<p>I'm writing tests for my FastAPI application that uses an asynchronous Postgres connection:</p>
<pre><code># backend/database/session.py
from sqlmodel import SQLModel
from sqlmodel.ext.asyncio.session import AsyncSession
from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.orm import sessionmaker
from config import config
engine = create_async_engine(
config.db_uri,
echo=config.db_echo,
# Connection pool configuration for scalability
# Number of connections to maintain in pool
pool_size=config.db_pool_size,
# Additional connections when pool is full
max_overflow=config.db_max_overflow,
# Validate connections before use
pool_pre_ping=True,
# Recycle connections after specified time
pool_recycle=config.db_pool_recycle,
# Timeout waiting for available connection
pool_timeout=config.db_pool_timeout,
# Reset connection state on return
pool_reset_on_return='commit',
# Performance optimizations
# Don't log pool operations (set to True for debugging)
echo_pool=False,
# Connection arguments
connect_args={
"ssl": config.db_ssl
}
)
async def init_db():
async with engine.begin() as conn:
await conn.run_sync(SQLModel.metadata.create_all)
async def get_session() -> AsyncSession: # type: ignore
Session = sessionmaker(
bind=engine,
class_=AsyncSession,
expire_on_commit=False,
autocommit=False,
autoflush=False
)
async with Session() as session:
yield session
</code></pre>
<p>My app, DB connection, and API routes work fine. My tests, however, do not. To be more precise, all tests that require a <code>db_session</code> do not work correctly.</p>
<p>I run my tests effectively in a python file via the <code>subprocess</code> module because I'm updating the env variables before each run to point to a different test database and run the migrations.</p>
<pre><code># backend/tests/run_tests.py
# <Update env and run alembic upgrade head>
# Run the integration tests
def run_tests(env):
print("Running unit tests...\n")
try:
# subprocess.run("ls")
subprocess.run("pytest -s --color=yes",
shell=True, check=True, text=True, env=env)
except subprocess.CalledProcessError as e:
print(f"Error when running tests: {e}")
pass
print("\nTests completed.")
# <Run alembic downgrade base>
</code></pre>
<p>Here is one test that requires the db session and will fail:</p>
<pre><code># backend/tests/user/test_signup.py
import pytest
from httpx import AsyncClient
from httpx._transports.asgi import ASGITransport
from main import app
@pytest.mark.asyncio
async def test_signup_successful():
"""Test user signup with valid data"""
# Use ASGITransport explicitly
transport = ASGITransport(app=app)
async with AsyncClient(transport=transport, base_url="http://test") as client:
# Define the request payload
payload = {
"first_name": "Test",
"last_name": "User",
"email": "integration_testuser@example.com",
"password": "Strongpassword123-"
}
# Perform POST request
response = await client.post("/user/signup", json=payload)
# Assertions
assert response.status_code == 201
data = response.json()
assert data["email"] == payload["email"]
</code></pre>
<p>This is the traceback I get (cut down to the last 40% of the actual traceback due to stack character limit):</p>
<pre><code> File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/core/user/helper.py", line 52, in _get_users
result = await session.exec(statement)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlmodel/ext/asyncio/session.py", line 81, in exec
result = await greenlet_spawn(
^^^^^^^^^^^^^^^^^^^^^
...<7 lines>...
)
^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 201, in greenlet_spawn
result = context.throw(*sys.exc_info())
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlmodel/orm/session.py", line 66, in exec
results = super().execute(
statement,
...<4 lines>...
_add_event=_add_event,
)
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py", line 2365, in execute
return self._execute_internal(
~~~~~~~~~~~~~~~~~~~~~~^
statement,
^^^^^^^^^^
...<4 lines>...
_add_event=_add_event,
^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py", line 2241, in _execute_internal
conn = self._connection_for_bind(bind)
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py", line 2110, in _connection_for_bind
return trans._connection_for_bind(engine, execution_options)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 2, in _connection_for_bind
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/orm/state_changes.py", line 137, in _go
ret_value = fn(self, *arg, **kw)
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py", line 1189, in _connection_for_bind
conn = bind.connect()
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 3277, in connect
return self._connection_cls(self)
~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 143, in __init__
self._dbapi_connection = engine.raw_connection()
~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py", line 3301, in raw_connection
return self.pool.connect()
~~~~~~~~~~~~~~~~~^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/pool/base.py", line 447, in connect
return _ConnectionFairy._checkout(self)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/pool/base.py", line 1363, in _checkout
with util.safe_reraise():
~~~~~~~~~~~~~~~~~^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/util/langhelpers.py", line 224, in __exit__
raise exc_value.with_traceback(exc_tb)
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/pool/base.py", line 1301, in _checkout
result = pool._dialect._do_ping_w_event(
fairy.dbapi_connection
)
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/engine/default.py", line 728, in _do_ping_w_event
return self.do_ping(dbapi_connection)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 1169, in do_ping
dbapi_connection.ping()
~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 813, in ping
self._handle_exception(error)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 794, in _handle_exception
raise error
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 811, in ping
_ = self.await_(self._async_ping())
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
value = await result
^^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 820, in _async_ping
await tr.start()
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/asyncpg/transaction.py", line 146, in start
await self._connection.execute(query)
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/asyncpg/connection.py", line 349, in execute
result = await self._protocol.query(query, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "asyncpg/protocol/protocol.pyx", line 375, in query
RuntimeError: Task <Task pending name='starlette.middleware.base.BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.coro' coro=<BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.coro() running at /Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/starlette/middleware/base.py:144> cb=[TaskGroup._spawn.<locals>.task_done() at /Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/anyio/_backends/_asyncio.py:794]> got Future <Future pending cb=[BaseProtocol._on_waiter_completed()]> attached to a different loop
During handling of the above exception, another exception occurred:
session = <sqlalchemy.orm.session.AsyncSession object at 0x1100ebe00>
@pytest.mark.asyncio
async def test_signup_successful(session):
"""Test user signup with valid data"""
# Use ASGITransport explicitly
transport = ASGITransport(app=app)
async with AsyncClient(transport=transport, base_url="http://test") as client:
# Define the request payload
payload = {
"first_name": "Test",
"last_name": "User",
"email": "integration_testuser@example.com",
"password": "Strongpassword123-"
}
# Perform POST request
> response = await client.post("/user/signup", json=payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
tests/user/test_user_signup.py:35:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.13/site-packages/httpx/_client.py:1859: in post
return await self.request(
.venv/lib/python3.13/site-packages/httpx/_client.py:1540: in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/httpx/_client.py:1629: in send
response = await self._send_handling_auth(
.venv/lib/python3.13/site-packages/httpx/_client.py:1657: in _send_handling_auth
response = await self._send_handling_redirects(
.venv/lib/python3.13/site-packages/httpx/_client.py:1694: in _send_handling_redirects
response = await self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/httpx/_client.py:1730: in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/httpx/_transports/asgi.py:170: in handle_async_request
await self.app(scope, receive, send)
.venv/lib/python3.13/site-packages/fastapi/applications.py:1054: in __call__
await super().__call__(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/applications.py:113: in __call__
await self.middleware_stack(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/middleware/errors.py:186: in __call__
raise exc
.venv/lib/python3.13/site-packages/starlette/middleware/errors.py:164: in __call__
await self.app(scope, receive, _send)
.venv/lib/python3.13/site-packages/starlette/middleware/base.py:182: in __call__
with recv_stream, send_stream, collapse_excgroups():
^^^^^^^^^^^^^^^^^^^^
/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/contextlib.py:162: in __exit__
self.gen.throw(value)
.venv/lib/python3.13/site-packages/starlette/_utils.py:83: in collapse_excgroups
raise exc
.venv/lib/python3.13/site-packages/starlette/middleware/base.py:184: in __call__
response = await self.dispatch_func(request, call_next)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
middleware.py:27: in execution_timer
response = await call_next(request)
^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/starlette/middleware/base.py:159: in call_next
raise app_exc
.venv/lib/python3.13/site-packages/starlette/middleware/base.py:144: in coro
await self.app(scope, receive_or_disconnect, send_no_error)
.venv/lib/python3.13/site-packages/starlette/middleware/trustedhost.py:36: in __call__
await self.app(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/middleware/cors.py:85: in __call__
await self.app(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/middleware/exceptions.py:63: in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/_exception_handler.py:53: in wrapped_app
raise exc
.venv/lib/python3.13/site-packages/starlette/_exception_handler.py:42: in wrapped_app
await app(scope, receive, sender)
.venv/lib/python3.13/site-packages/starlette/routing.py:716: in __call__
await self.middleware_stack(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/routing.py:736: in app
await route.handle(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/routing.py:290: in handle
await self.app(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/routing.py:78: in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
.venv/lib/python3.13/site-packages/starlette/_exception_handler.py:53: in wrapped_app
raise exc
.venv/lib/python3.13/site-packages/starlette/_exception_handler.py:42: in wrapped_app
await app(scope, receive, sender)
.venv/lib/python3.13/site-packages/starlette/routing.py:75: in app
response = await f(request)
^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/fastapi/routing.py:302: in app
raw_response = await run_endpoint_function(
.venv/lib/python3.13/site-packages/fastapi/routing.py:213: in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/slowapi/extension.py:734: in async_wrapper
response = await func(*args, **kwargs) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
api/user/router.py:50: in signup
user_exists = await service.user_exists(email=email, session=session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
core/user/service.py:67: in user_exists
user = await self.get_user_by_email(email, session)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
core/user/service.py:39: in get_user_by_email
return await service_helper._get_users(session=session, where_clause=User.email == email, include_roles=include_roles, include_permissions=include_permissions)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
core/user/helper.py:52: in _get_users
result = await session.exec(statement)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlmodel/ext/asyncio/session.py:81: in exec
result = await greenlet_spawn(
.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py:201: in greenlet_spawn
result = context.throw(*sys.exc_info())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlmodel/orm/session.py:66: in exec
results = super().execute(
.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py:2365: in execute
return self._execute_internal(
.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py:2241: in _execute_internal
conn = self._connection_for_bind(bind)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py:2110: in _connection_for_bind
return trans._connection_for_bind(engine, execution_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
<string>:2: in _connection_for_bind
???
.venv/lib/python3.13/site-packages/sqlalchemy/orm/state_changes.py:137: in _go
ret_value = fn(self, *arg, **kw)
^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/orm/session.py:1189: in _connection_for_bind
conn = bind.connect()
^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py:3277: in connect
return self._connection_cls(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py:143: in __init__
self._dbapi_connection = engine.raw_connection()
^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/engine/base.py:3301: in raw_connection
return self.pool.connect()
^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/pool/base.py:447: in connect
return _ConnectionFairy._checkout(self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/pool/base.py:1363: in _checkout
with util.safe_reraise():
^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/util/langhelpers.py:224: in __exit__
raise exc_value.with_traceback(exc_tb)
.venv/lib/python3.13/site-packages/sqlalchemy/pool/base.py:1301: in _checkout
result = pool._dialect._do_ping_w_event(
.venv/lib/python3.13/site-packages/sqlalchemy/engine/default.py:728: in _do_ping_w_event
return self.do_ping(dbapi_connection)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py:1169: in do_ping
dbapi_connection.ping()
.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py:813: in ping
self._handle_exception(error)
.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py:794: in _handle_exception
raise error
.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py:811: in ping
_ = self.await_(self._async_ping())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py:132: in await_only
return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py:196: in greenlet_spawn
value = await result
^^^^^^^^^^^^
.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py:820: in _async_ping
await tr.start()
.venv/lib/python3.13/site-packages/asyncpg/transaction.py:146: in start
await self._connection.execute(query)
.venv/lib/python3.13/site-packages/asyncpg/connection.py:349: in execute
result = await self._protocol.query(query, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E RuntimeError: Task <Task pending name='starlette.middleware.base.BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.coro' coro=<BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.coro() running at /Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/starlette/middleware/base.py:144> cb=[TaskGroup._spawn.<locals>.task_done() at /Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/anyio/_backends/_asyncio.py:794]> got Future <Future pending cb=[BaseProtocol._on_waiter_completed()]> attached to a different loop
asyncpg/protocol/protocol.pyx:375: RuntimeError
--------------------------------------------------------------------- Captured log call ----------------------------------------------------------------------
ERROR sqlalchemy.pool.impl.AsyncAdaptedQueuePool:base.py:376 Exception terminating connection <AdaptedConnection <asyncpg.connection.Connection object at 0x1101dc500>>
Traceback (most recent call last):
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/pool/base.py", line 372, in _close_connection
self._dialect.do_terminate(connection)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 1136, in do_terminate
dbapi_connection.terminate()
~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py", line 907, in terminate
self.await_(asyncio.shield(self._connection.close(timeout=2)))
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 132, in await_only
return current.parent.switch(awaitable) # type: ignore[no-any-return,attr-defined] # noqa: E501
~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 196, in greenlet_spawn
value = await result
^^^^^^^^^^^^
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/asyncpg/connection.py", line 1504, in close
await self._protocol.close(timeout)
File "asyncpg/protocol/protocol.pyx", line 627, in close
File "asyncpg/protocol/protocol.pyx", line 660, in asyncpg.protocol.protocol.BaseProtocol._request_cancel
File "/Users/user/Documents/Programming/Python/Visual Studio Code/rag-sample/backend/.venv/lib/python3.13/site-packages/asyncpg/connection.py", line 1673, in _cancel_current_command
self._cancellations.add(self._loop.create_task(self._cancel(waiter)))
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 466, in create_task
self._check_closed()
~~~~~~~~~~~~~~~~~~^^
File "/opt/homebrew/Cellar/python@3.13/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 556, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
================================================================== short test summary info ===================================================================
FAILED tests/user/test_user_signup.py::test_signup_successful - RuntimeError: Task <Task pending name='starlette.middleware.base.BaseHTTPMiddleware.__call__.<locals>.call_next.<locals>.coro' coro=<BaseHTTPMiddleware._...
================================================================ 1 failed, 11 passed in 1.28s ================================================================
<sys>:0: RuntimeWarning: coroutine 'Connection._cancel' was never awaited
Error when running tests: Command 'pytest -s --color=yes' returned non-zero exit status 1.
</code></pre>
<p>Here is also the <a href="https://github.com/Daniel-Fauland/rag-sample/tree/main/backend/tests" rel="nofollow noreferrer">GitHub repo</a> if you need more reference.</p>
<p>I also saw this <a href="https://github.com/pytest-dev/pytest-asyncio/issues/991" rel="nofollow noreferrer">GitHub issue</a> but couldn't really make it work and I'm also not 100% sure if that is the exact same error but it seems likely.</p>
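<p>For completeness, the only mitigation that has helped me so far (a sketch, and possibly just masking the real problem) is disabling connection pooling for the test engine with <code>NullPool</code>, so no connection created on one event loop is reused on another:</p>
<pre><code>from sqlalchemy.ext.asyncio import create_async_engine
from sqlalchemy.pool import NullPool

from config import config

engine = create_async_engine(
    config.db_uri,
    poolclass=NullPool,  # every checkout opens a fresh connection
)
</code></pre>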
|
<python><pytest><python-asyncio><fastapi><asyncpg>
|
2025-09-15 09:27:34
| 3
| 1,025
|
Daniel
|
79,764,845
| 10,953,274
|
TensorRT DLA Engine Build Fails for PWC-Net on Jetson NX - Missing Layer Support?
|
<p>I'm converting a PWC-Net optical flow model to run on Jetson NX DLA using the iSLAM framework, but the TensorRT engine build fails during DLA optimization.</p>
<h2>Environment</h2>
<ul>
<li><strong>Hardware</strong>: NVIDIA Jetson NX</li>
<li><strong>Framework</strong>: iSLAM (PyTorch-based SLAM system)</li>
<li><strong>TensorRT</strong>: 8.2.1</li>
<li><strong>CUDA</strong>: 11.4</li>
<li><strong>Model</strong>: PWC-Net for optical flow estimation</li>
</ul>
<h2>File Structure</h2>
<pre><code>iSLAM/
├── models/stereo_cvt_tartanvo_1914.pkl
└── Network/
    ├── convert_dla_final.py    # Place conversion script here
    ├── PWC.py
    └── dla_module_wrapper.py   # Place DLA wrapper here
</code></pre>
<h2>Conversion Code</h2>
<p><strong>File: <code>Network/convert_dla_final.py</code></strong></p>
<pre class="lang-py prettyprint-override"><code>def build_tensorrt_engine(onnx_path):
import tensorrt as trt
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open(onnx_path, 'rb') as model_file:
parser.parse(model_file.read())
config = builder.create_builder_config()
config.max_workspace_size = 1 << 30
# DLA configuration - THIS IS WHERE IT FAILS
if builder.num_DLA_cores > 0:
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0
config.flags |= 1 << int(trt.BuilderFlag.FP16)
config.flags |= 1 << int(trt.BuilderFlag.GPU_FALLBACK)
engine = builder.build_engine(network, config) # Returns None
return engine
</code></pre>
<h2>Error Output</h2>
<pre><code>[TensorRT] ERROR: DLA does not support layer: PWN_/conv1a/Conv
[TensorRT] ERROR: DLA does not support layer: PWN_/leaky_relu_1/LeakyRelu
[TensorRT] ERROR: Network validation failed.
</code></pre>
<h2>Model Architecture</h2>
<p>The PWC-Net uses these layer types:</p>
<ul>
<li>Conv2d layers (conv1a through conv6a)</li>
<li>LeakyReLU activations</li>
<li>Correlation layers for cost volume</li>
<li>Warping operations</li>
<li>Upsampling layers</li>
</ul>
<h2>Specific Question</h2>
<p><strong>Which PWC-Net layers are incompatible with Jetson NX DLA</strong>, and how do I configure TensorRT to automatically fall back to GPU for unsupported operations while keeping supported layers on DLA?</p>
<p>The error suggests Conv2d and LeakyReLU should be supported on DLA, but the build still fails. I've enabled <code>GPU_FALLBACK</code> but the engine build returns None instead of a mixed DLA/GPU engine.</p>
<h2>Expected Behavior</h2>
<p>Engine should build successfully with DLA-compatible layers on DLA and incompatible layers automatically falling back to GPU execution.</p>
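<p>One thing I have not verified yet (a sketch based on the TensorRT Python API) is assigning the device per layer instead of relying only on the global <code>GPU_FALLBACK</code> flag, e.g. right before <code>build_engine</code>:</p>
<pre class="lang-py prettyprint-override"><code># after parsing the ONNX model and before building the engine
for i in range(network.num_layers):
    layer = network.get_layer(i)
    if not config.can_run_on_DLA(layer):
        # push layers the DLA cannot take onto the GPU explicitly
        config.set_device_type(layer, trt.DeviceType.GPU)
</code></pre>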
|
<python><machine-learning><nvidia><opticalflow><tensorrt>
|
2025-09-15 07:33:55
| 0
| 705
|
Unknown
|
79,764,832
| 9,021,547
|
How to access params string within a DAG run in Airflow?
|
<p>I have a dag, which runs several sql scripts on certain tables. There are two options to run this dag:</p>
<ol>
<li>On the production tables</li>
<li>On frozen archived tables</li>
</ol>
<p>I want to be able to select which tables to use based on a dag_run param value. The general structure is something like this:</p>
<pre><code>
from datetime import datetime, timedelta, date
from airflow import DAG
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator
from airflow.models import DagRun
from airflow.decorators import task
data_dict = {
'prod':{'tab_1': 'table_name_1','tab_2': 'table_name_2'},
'arch':{'tab_1': 'table_name_1_arch', 'tab_2': 'table_name_2_arch'}
}
with DAG(
'sgk_test_2',
description='sgk_test_2',
tags=["sgk_test"],
schedule_interval=None,
start_date=datetime(2025, 7, 1),
default_args={
'retries': 0,
'retry_delay': timedelta(minutes=1),
'conn_id': 'sgk_gp_tau_pvr'
},
params={
'tab_type':'',
}
) as dag:
@task(task_id='task_0')
def get_type(**context):
params = context.get('params', {})
tab_type = params.get('tab_type')
return tab_type
tab_type = get_type()
task_1 = SQLExecuteQueryOperator(
task_id='task_1',
sql=f"select * from {data_dict[tab_type]['tab_1']}"
)
task_2 = SQLExecuteQueryOperator(
task_id='task_2',
sql=f"select * from {data_dict[tab_type]['tab_2']}"
)
task_0 >> task_1 >> task_2
</code></pre>
<p>The code above gives the following error:</p>
<pre><code> sql=f"select * from {data_dict[tab_type]['tab_1']}"
~~~~~~~~~^^^^^^^^^^
TypeError: unhashable type: 'PlainXComArg'
</code></pre>
<p>After searching on Google for a couple of hours, I was unable to solve the problem. Please help.</p>
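<p>For reference, one direction that looked promising to me (a sketch I have not fully verified) is to resolve the table name at run time rather than at parse time, by templating <code>sql</code> with an XCom pull from an upstream task, since <code>sql</code> is a templated field of <code>SQLExecuteQueryOperator</code>:</p>
<pre><code># inside the same DAG file as above
@task(task_id='task_0')
def get_table(**context):
    tab_type = context['params']['tab_type']
    return data_dict[tab_type]['tab_1']

task_0 = get_table()

task_1 = SQLExecuteQueryOperator(
    task_id='task_1',
    # rendered when the DAG run executes, not when the file is parsed
    sql="select * from {{ ti.xcom_pull(task_ids='task_0') }}",
)

task_0 >> task_1
</code></pre>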
|
<python><airflow>
|
2025-09-15 07:09:16
| 2
| 421
|
Serge Kashlik
|
79,764,826
| 396,373
|
How to verify that obscure Python behavior is by design
|
<p>I've been experimenting in order to deepen my understanding of Python metaprogramming, and I have noticed a potentially useful behavior that I cannot find documentation to explain, so I don't know whether it is safe to use and assume it will work in other Python versions and implementations.</p>
<p>Specifically, I have found that if I have attributes of the same name on a class and its metaclass, both containing data descriptors, then the metaclass descriptor will be used when the attribute is accessed through the class, and when it is accessed through an instance, the class' descriptor is employed.</p>
<p>Does anyone know of any documentation that directly or indirectly explains why this behavior occurs? I have already looked in the regular documentation, including the data model, and searched for any PEPs that seem to cover it.</p>
<p>Code example:</p>
<pre class="lang-py prettyprint-override"><code>class ExampleDescriptor:
def __init__(self, name, getter_result):
self.name = name
self.getter_result = getter_result
def __get__(self, obj, type=None):
print(f'"{self.name}" getter invoked.')
return self.getter_result
def __set__(self, obj, value):
print(f'"{self.name}" setter invoked with value={value}.')
def __delete__(self, obj):
print(f'"{self.name}" deleter invoked.')
class MyMeta(type):
myattr = ExampleDescriptor('class level', 1)
class MyObj(metaclass=MyMeta):
myattr = ExampleDescriptor('instance level', 2)
print(MyObj.myattr)
MyObj.myattr = 3
del MyObj.myattr
myobj = MyObj()
print(myobj.myattr)
myobj.myattr = 4
del myobj.myattr
</code></pre>
<p>Example output:</p>
<pre><code>"class level" getter invoked.
1
"class level" setter invoked with value=3.
"class level" deleter invoked.
"instance level" getter invoked.
2
"instance level" setter invoked with value=4.
"instance level" deleter invoked.
</code></pre>
|
<python><language-lawyer><metaprogramming>
|
2025-09-15 07:04:29
| 1
| 12,777
|
Steve Jorgensen
|
79,764,680
| 1,844,397
|
Map a generic type to an instance of its type
|
<p>I have the following inheritance structure:</p>
<pre class="lang-py prettyprint-override"><code>class S:
...
class A(S):
...
class B(S):
...
</code></pre>
<p>I'd like to conceptually do something like the following:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
T = TypeVar('T', bound=S)
items: dict[type[T], T] = dict()
def get(self, t: type[T]) -> T:
return self.items[t]
</code></pre>
<p>In words, I have a <code>dict</code> that maps from subtypes (<code>A</code>, <code>B</code>) to instances of those subtypes (<code>A()</code>, <code>B()</code>). I want to type hint a method that wraps this dictionary, and I want static analysis to show that it always returns an instance of the input type specifically, not just an instance of the supertype.</p>
<p>How can I type hint this properly?</p>
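<p>The pattern I have seen elsewhere (a sketch; I am not sure it is the best answer) keeps the internal dict loosely typed and restores the key/value link at the API boundary with a <code>cast</code>, since the per-entry dependency cannot be expressed on the dict itself:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, cast

T = TypeVar('T', bound=S)

class Foo:
    def __init__(self) -> None:
        # the dict cannot express "value is an instance of its key"
        self._items: dict[type[S], S] = {}

    def add(self, item: S) -> None:
        self._items[type(item)] = item

    def get(self, t: type[T]) -> T:
        # restore the per-entry link at the boundary
        return cast(T, self._items[t])
</code></pre>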
|
<python><generics><python-typing>
|
2025-09-15 02:00:37
| 2
| 1,744
|
bossi
|
79,764,480
| 81,120
|
Recursively rename all column names and nested struct fields to lowercase in a Polars DataFrame?
|
<p>Is there a way for Polars to rename all columns, not just at the top level, but including multiple levels of nested structs?</p>
<p>I need them to all be lowercase via <code>str.lower</code></p>
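<p>For context, my current attempt (a sketch that relies on struct casts matching fields by position, which I have not fully verified) recursively rebuilds each dtype with lowercased field names and casts to it, after renaming the top-level columns:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

def lower_dtype(dtype: pl.DataType) -> pl.DataType:
    # recursively lowercase struct field names, descending into lists too
    if isinstance(dtype, pl.Struct):
        return pl.Struct({f.name.lower(): lower_dtype(f.dtype) for f in dtype.fields})
    if isinstance(dtype, pl.List):
        return pl.List(lower_dtype(dtype.inner))
    return dtype

def lowercase_all(df: pl.DataFrame) -> pl.DataFrame:
    df = df.rename({c: c.lower() for c in df.columns})
    return df.with_columns(
        pl.col(c).cast(lower_dtype(dt)) for c, dt in df.schema.items()
    )
</code></pre>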
|
<python><dataframe><python-polars>
|
2025-09-14 18:15:42
| 4
| 598
|
dsully
|
79,764,028
| 16,462,878
|
Validate a globed-like path
|
<p>I am writing an API for a program. The program has native support for "globbed" paths such as <code>img-0*.png</code>. I would like to know how to be sure that there is <em>at least</em> one file satisfying that pattern, a kind of <code>is_globable(path)</code> method.</p>
<pre><code>path = 'img-0*.png'
if os.path.isfile(path):
pass
elif is_globable(path):
pass
else:
raise Exception(...)
</code></pre>
<p>Neither <a href="https://docs.python.org/3/library/glob.html" rel="nofollow noreferrer"><code>glob</code></a> nor <a href="https://docs.python.org/3/library/pathlib.html" rel="nofollow noreferrer"><code>pathlib</code></a> provides such functionality.
The closest method I found is <a href="https://docs.python.org/3/library/pathlib.html#pathlib.PurePath.full_match" rel="nofollow noreferrer"><code>pathlib.PurePath.full_match</code></a>, but it solves the opposite problem.</p>
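<p>The best I have come up with myself (a short sketch) is to ask <code>glob</code> lazily for the first match only, but I wonder whether there is a more direct way:</p>
<pre><code>import glob

def is_globable(path: str) -> bool:
    # True if at least one existing path matches the pattern
    return next(glob.iglob(path), None) is not None
</code></pre>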
|
<python><validation><path><glob>
|
2025-09-14 00:28:18
| 3
| 5,264
|
cards
|
79,764,012
| 2,989,089
|
Make subclass use custom `__str__` in an f-string
|
<p>I wrote a subclass of Decimal to represent an amount of money. I wrote a custom <code>__str__</code> to display the currency along with a sign format. My method works when calling <code>str()</code>, but in an f-string my custom <code>__str__</code> is somehow not used. What is happening here?</p>
<p>My goal is to have my custom <code>__str__</code> used in f-string situations. I'm also interested in understanding what is happening here. Something is going on that defies my current understanding of <code>__str__</code>, <code>__format__</code> and inheritance. I thought the default behavior of <code>format</code> with no format specified was to delegate to <code>str</code>, but here it sends it to <code>str</code> of the parent class instead.</p>
<p>Here is the minimal code to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>from decimal import Decimal
class Money(Decimal):
    CURRENCY = "€"
def __new__(cls, number):
return super().__new__(cls, Decimal(number).quantize(Decimal("0.01")))
def __str__(self):
return f"{self:+}{self.CURRENCY}"
m = Money(10)
print("Test 1 - str():", str(m))
print("Test 2 - print():", m)
print(f"Test 3 - f-string: {m}")
print("Test 4 - f-string str():", f"{str(m)}")
</code></pre>
<p>This results in</p>
<pre><code>Test 1 - str(): +10.00€
Test 2 - print(): +10.00€
Test 3 - f-string: 10.00 # only here is my custom str not called
Test 4 - f-string str(): +10.00€
</code></pre>
<p>Python version is 3.13.7.</p>
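<p>For reference, the only fix I have found so far (a sketch; I would still like to understand the mechanism) is to override <code>__format__</code> as well, since that seems to be what the f-string actually calls:</p>
<pre class="lang-py prettyprint-override"><code>from decimal import Decimal

class Money(Decimal):
    CURRENCY = "€"

    def __new__(cls, number):
        return super().__new__(cls, Decimal(number).quantize(Decimal("0.01")))

    def __str__(self):
        return f"{self:+}{self.CURRENCY}"

    def __format__(self, spec):
        # an empty spec now routes through the custom __str__;
        # a non-empty spec falls back to Decimal's formatting
        return str(self) if not spec else super().__format__(spec)

m = Money(10)
print(f"{m}")  # +10.00€
</code></pre>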
|
<python><f-string>
|
2025-09-13 23:45:32
| 2
| 884
|
Antoine Gallix
|
79,763,862
| 428,542
|
How to use type hints in a dict with TypeVar and a function that depends on that TypeVar?
|
<p>I have a relatively simple app in which I use a Message Bus to pass messages.</p>
<p>I prefer somewhat stricter type checking: rather than passing arbitrary text strings as topics, I pass message objects, and to be exact: I pass instances of a subclass of the BaseMessage class, and I want to ensure that the handler is designed to take that specific subclass as a parameter.</p>
<p>Here is the working code:</p>
<pre><code>from dataclasses import dataclass, field, replace
from typing import Type, TypeVar, Callable
class BaseMessage:
# Abstract class. All messages must be a subclass of BaseMessage
pass
# Message represent any subclass of BaseMessage. Used for type hints
Message = TypeVar('Message', bound=BaseMessage)
# Define the different messages that can be passed around:
@dataclass
class MessageType1(BaseMessage):
data: int
@dataclass
class MessageType2(BaseMessage):
point: tuple[float, float]
class MessageBus():
def __init__(self):
self.subscribers = {}
def subscribe(self, messagetype: Type[Message], handler: Callable[[Message], None]) -> None:
"""Register a handler (subscriber) for a topic."""
self.subscribers.setdefault(messagetype, []).append(handler)
def publish(self, message: BaseMessage):
"""Send a message to all handlers of the topic."""
for handler in self.subscribers.get(type(message), []):
handler(message)
# Example usage
def my_handler1(message: MessageType1):
print(f"my_handler1 message: {message}")
def my_handler2(message: MessageType2):
print(f"my_handler2 message: {message}")
bus = MessageBus()
bus.subscribe(MessageType1, my_handler1)
bus.subscribe(MessageType2, my_handler2)
bus.subscribe(MessageType1, my_handler2) # I want to see an error here somewhere!
bus.publish(MessageType1(3))
bus.publish(MessageType2((5.0, 2.5)))
</code></pre>
<p>As you can see, I define two handlers, <code>my_handler1</code> and <code>my_handler2</code>, and then register them with the message bus, specifying which function should be called for which message type (with that message as parameter).</p>
<p>For illustration purpose, this code contains a bug: <code>bus.subscribe(MessageType1, my_handler2)</code>. This is wrong, as <code>my_handler2</code> takes a <code>MessageType2</code> instance instead of a <code>MessageType1</code> instance.</p>
<p>The good news is that in the above code, pyright alerts me about this issue:</p>
<pre class="lang-none prettyprint-override"><code>error: Argument of type "(message: MessageType2) -> None" cannot be assigned to parameter "handler" of type "(Message@subscribe) -> None" in function "subscribe"
ย ย Type "(message: MessageType2) -> None" is not assignable to type "(MessageType1) -> None"
ย ย ย ย Parameter 1: type "MessageType1" is incompatible with type "MessageType2"
ย ย ย ย ย ย "MessageType1" is not assignable to "MessageType2" (reportArgumentType)
</code></pre>
<p>So everything works.</p>
<p>The only thing what I still like to do, is to type-hint <code>MessageBus.subscribers</code> in this snippet:</p>
<pre><code>class MessageBus():
def __init__(self):
self.subscribers = {}
</code></pre>
<p>I kind of assumed that this would be either</p>
<pre><code>self.subscribers: dict[Type[Message], list[Callable[[Message], None]]] = {}
</code></pre>
<p>or</p>
<pre><code>self.subscribers: dict[Type[BaseMessage], list[Callable[[BaseMessage], None]]] = {}
</code></pre>
<p>However, with the first, pyright gives an error <code>Type Variable "Message" has no meaning in this context</code>. And with the second line, pyright gives an error during the <code>append()</code> call: <code>Type "(Message@subscribe) -> None" is not assignable to type "(BaseMessage) -> None"</code></p>
<p>So my questions:</p>
<ol>
<li>What is the correct type hint for <code>self.subscribers</code> in the above code? (and why?)</li>
<li>Would it be possible to define a type hinting shortcut for the function call, to enhance readability? E.g. <code>MessageHandler = Callable[[Message], None]</code> (see the sketch below this list)</li>
<li>Alternatively, is there a recommended solution with a subscriber-publish model that passes Message Objects? I prefer solutions that use the Python standard library instead of external packages.</li>
</ol>
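<p>Regarding question 2, this is the kind of alias I have in mind; I am not sure whether the type variable keeps its meaning when used through an alias like this:</p>
<pre><code>from typing import Callable, Type, TypeVar

Message = TypeVar('Message', bound=BaseMessage)
MessageHandler = Callable[[Message], None]


class MessageBus():
    def subscribe(self, messagetype: Type[Message], handler: MessageHandler[Message]) -> None:
        ...
</code></pre>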
|
<python><python-typing><type-variables>
|
2025-09-13 17:35:45
| 1
| 3,568
|
MacFreek
|
79,763,516
| 259,543
|
Type invariance in generic class when callable is involved
|
<p>Why, in the following example, the type variable <code>T</code> in class <code>E</code> is invariant?</p>
<pre class="lang-py prettyprint-override"><code>class C:
pass
class D(C):
pass
class E[T: C = C]:
def g(self, f: Callable[[], T]) -> T:
return f()
x: E = E[D]() # error
</code></pre>
<p>Callable return types are covariant. This typechecks:</p>
<pre class="lang-py prettyprint-override"><code>x = E() # x is E[C]
x.g(D) # f is Callable[[], D] which fits on Callable[[], C]
</code></pre>
<p>Removing or otherwise changing the callable to not have <code>T</code> as a return type makes <code>T</code> covariant again:</p>
<pre class="lang-py prettyprint-override"><code>class E[T: C = C]:
def g(self, f: Callable[[], None]) -> T:
return cast(T, f()) # anything
x: E = E[D]() # OK
</code></pre>
<p>Hence, considering the first example, what is the rationale for <code>E[D]</code> not being compatible with <code>E[C]</code>?</p>
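<p>For what it is worth, here is a sketch of the unsoundness I suspect the invariance is meant to prevent, reusing the classes from the first snippet. If <code>E[D]</code> were assignable to <code>E[C]</code>, a caller holding the <code>E[C]</code> view could hand a factory producing plain <code>C</code> instances to an object whose <code>g</code> is declared to receive a <code>Callable[[], D]</code> and return a <code>D</code>:</p>
<pre class="lang-py prettyprint-override"><code>def make_c() -> C:
    return C()


y = E[D]()
x: E = y              # rejected today; suppose it were allowed (E defaults to E[C])
result = x.g(make_c)  # fine for E[C]: make_c is a Callable[[], C]
# ...but y is really an E[D]; its g() promised to return a D, yet at runtime
# it would hand back the plain C produced by make_c.
</code></pre>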
|
<python><python-typing>
|
2025-09-13 05:13:03
| 2
| 5,252
|
alecov
|
79,763,515
| 1,719,931
|
How to create a SQL View in SQLModel?
|
<p>How can I create a view in <a href="https://github.com/fastapi/sqlmodel" rel="nofollow noreferrer">SQLModel</a>?</p>
<p>For instance, starting from this example taken from <a href="https://sqlmodel.tiangolo.com/tutorial/connect/create-connected-rows/#refresh-and-print-heroes" rel="nofollow noreferrer">the user guide</a>, which creates a table for Hero and a table for Team, where a Team can contain multiple Heros:</p>
<pre class="lang-py prettyprint-override"><code>from sqlmodel import Field, Session, SQLModel, create_engine
class Team(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str = Field(index=True)
headquarters: str
class Hero(SQLModel, table=True):
id: int | None = Field(default=None, primary_key=True)
name: str = Field(index=True)
secret_name: str
age: int | None = Field(default=None, index=True)
team_id: int | None = Field(default=None, foreign_key="team.id")
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
def create_heroes():
with Session(engine) as session:
team_preventers = Team(name="Preventers", headquarters="Sharp Tower")
team_z_force = Team(name="Z-Force", headquarters="Sister Margaret's Bar")
session.add(team_preventers)
session.add(team_z_force)
session.commit()
hero_deadpond = Hero(
name="Deadpond", secret_name="Dive Wilson", team_id=team_z_force.id
)
hero_rusty_man = Hero(
name="Rusty-Man",
secret_name="Tommy Sharp",
age=48,
team_id=team_preventers.id,
)
hero_spider_boy = Hero(name="Spider-Boy", secret_name="Pedro Parqueador")
session.add(hero_deadpond)
session.add(hero_rusty_man)
session.add(hero_spider_boy)
session.commit()
session.refresh(hero_deadpond)
session.refresh(hero_rusty_man)
session.refresh(hero_spider_boy)
print("Created hero:", hero_deadpond)
print("Created hero:", hero_rusty_man)
print("Created hero:", hero_spider_boy)
def main():
create_db_and_tables()
create_heroes()
if __name__ == "__main__":
main()
</code></pre>
<p>I would like to create a view that shows an Hero's name with his corresponding Team's name.</p>
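<p>To make the goal concrete, this is roughly the query I want the view to expose, written as an ordinary <code>select</code>. It is only a sketch; what I am missing is how to turn it into an actual database view (ideally mapped to a read-only SQLModel class) rather than an ad-hoc query:</p>
<pre><code>from sqlmodel import Session, select

# Hero name together with the name of the team the hero belongs to
statement = select(Hero.name, Team.name).join(Team, isouter=True)

with Session(engine) as session:
    for hero_name, team_name in session.exec(statement):
        print(hero_name, team_name)
</code></pre>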
<p>Related: <a href="https://stackoverflow.com/questions/9766940/how-to-create-an-sql-view-with-sqlalchemy">How to create an SQL View with SQLAlchemy?</a></p>
|
<python><sqlmodel>
|
2025-09-13 05:12:43
| 2
| 5,202
|
robertspierre
|
79,763,416
| 16,037,994
|
setuptools: fetch a remote cross-compiled library
|
<p>My <a href="https://github.com/cvanaret/Uno/" rel="nofollow noreferrer">C++ library</a> uses external dependencies, most of which are open source, but at least one of them is only available as <a href="https://github.com/leyffer/BQPD_jll.jl/releases" rel="nofollow noreferrer">cross-compiled binaries</a>.
In our <a href="https://github.com/cvanaret/Uno/blob/main/.github/workflows/unit-tests-linux.yml#L40" rel="nofollow noreferrer">CI workflow</a>, we typically fetch the artifact that was compiled for a particular architecture, then link it.</p>
<p>I'm now working on Python bindings, and I started writing the <a href="https://github.com/cvanaret/Uno/blob/171c2f44893ad35d650fe8ad1d699b9ff9b0bb02/setup.py" rel="nofollow noreferrer"><code>setup.py</code> file</a> that compiles the shared library using CMake. Is it possible to achieve the same thing as the Github Action, namely to fetch the proper cross-compiled artifact from the dependency's repo automatically?</p>
<p>I read about the <code>runtime_library_dirs</code> and <code>package_data</code> options, but they assume all the artifacts are already available locally. I also tried giving the path to the <a href="https://github.com/leyffer/BQPD_jll.jl/releases/download/BQPD-v1.0.0%2B0/BQPD.v1.0.0.aarch64-linux-gnu-libgfortran5.tar.gz" rel="nofollow noreferrer"><code>.tar.gz</code></a> to <code>install_requires</code>, but that didn't work either (and it wouldn't pick the proper artifact automatically anyway).<br />
Hope there's a way to do it!</p>
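<p>To make it concrete, here is an untested sketch of what I am imagining: a custom <code>build_ext</code> in <code>setup.py</code> that downloads the artifact matching the current platform before CMake runs. The mapping keys and the local directory name are placeholders I made up; only the aarch64 URL is a real release asset:</p>
<pre><code>import platform
import tarfile
import urllib.request

from setuptools.command.build_ext import build_ext

ARTIFACTS = {
    # one entry per supported (system, machine); only this one is filled in
    ("Linux", "aarch64"): "https://github.com/leyffer/BQPD_jll.jl/releases/download/BQPD-v1.0.0%2B0/BQPD.v1.0.0.aarch64-linux-gnu-libgfortran5.tar.gz",
}


class fetch_then_build(build_ext):
    def run(self):
        # pick and download the artifact for the current platform
        url = ARTIFACTS[(platform.system(), platform.machine())]
        archive, _ = urllib.request.urlretrieve(url)
        with tarfile.open(archive) as tar:
            tar.extractall("extern/bqpd")  # directory later handed to CMake as a search path
        super().run()


# in setup(): cmdclass={"build_ext": fetch_then_build}
</code></pre>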
|
<python><github><cross-compiling><setuptools>
|
2025-09-12 22:54:44
| 0
| 401
|
Charlie Vanaret - the Uno guy
|
79,763,327
| 534,674
|
How to correct 3rd party sphinx ambiguous cross-reference warnings?
|
<p>I'm trying to document a variety of classes that use scikit-learn bases <code>BaseEstimator</code> and <code>TransformerMixin</code>. Sphinx builds with a warning that,</p>
<pre class="lang-console prettyprint-override"><code>/home/jake/github/proj/pkg/__init__.py:docstring of
sklearn.utils._set_output._SetOutputMixin.set_output:6: WARNING: more than one target
found for 'any' cross-reference 'transform': could be
:py:meth:`pkg.Class1.transform` or
:py:meth:`pkg.Class2.transform` or
:py:meth:`pkg.mod.Class3.transform` or
...
</code></pre>
<p>Here, all classes subclass <code>sklearn.base.BaseEstimator</code> and <code>sklearn.base.TransformerMixin</code>, which subclasses <code>sklearn.utils._set_output._SetOutputMixin</code>, whose <code>set_output</code> docstring includes the line</p>
<pre class="lang-none prettyprint-override"><code>Configure output of `transform` and `fit_transform`
</code></pre>
<p>It appears that, because <code>_SetOutputMixin</code> is a mixin and has no <code>transform</code> method to link, this would be ambiguous. Yet <code>Class1.set_output</code> has a reference to <code>Class1.transform</code>, etc., so it's making the correct choice.</p>
<p>This is sphinx 8.1.3, not sphinx 8.2.3. I've noticed that the warning goes away in 8.2.3, but not all extensions we use are compatible with 8.2. It appears from <a href="https://github.com/sphinx-doc/sphinx/pull/9732" rel="nofollow noreferrer">this pr</a> that there is no way of suppressing the warning prior to 8.2 (and I've tried with <code>suppress_warnings</code> and <code>show_warning_types</code>).</p>
<p>Since I can't change the 3rd party docstring (not sure I'd even know what to do in this case), is there a way to correct my package's docstring/inheritance to get rid of this error?</p>
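<p>The only idea I have come up with so far (untested) is to override <code>set_output</code> in each concrete class with its own, unambiguous docstring, so the inherited <code>_SetOutputMixin.set_output</code> docstring is never the one being documented. <code>Class1</code> here stands for each of my transformer classes:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.base import BaseEstimator, TransformerMixin


class Class1(TransformerMixin, BaseEstimator):
    def transform(self, X):
        ...

    def set_output(self, *, transform=None):
        """Configure output of :meth:`Class1.transform` and :meth:`Class1.fit_transform`."""
        return super().set_output(transform=transform)
</code></pre>
<p>I would rather not duplicate this boilerplate across every class, hence the question.</p>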
|
<python><scikit-learn><python-sphinx>
|
2025-09-12 20:33:03
| 0
| 1,806
|
Jake Stevens-Haas
|
79,763,247
| 6,439,229
|
How to make QTableView's drag selection behave like a QTreeView
|
<p>There's a difference in behaviour between <code>QTableView</code> and <code>QTreeView</code> when it comes to selecting multiple items by dragging the mouse.</p>
<p>In a <code>QTreeView</code> you can start outside the populated area and drag over the items, and they will be selected. In a <code>QTableView</code> this is not possible. In order to drag and select items you have to start dragging on an item.</p>
<p>Is it possible to make a <code>QTableView</code> behave like a <code>QTreeView</code> in this aspect?</p>
<p>Here's a little example where you can experience the difference:</p>
<pre><code>from PyQt6.QtCore import QAbstractTableModel, Qt
from PyQt6.QtWidgets import QApplication, QWidget, QVBoxLayout, QTableView, QTreeView


class TModel(QAbstractTableModel):
    def __init__(self):
        super().__init__()
        self.d = [[1, 2], [3, 4], [5, 6]]

    def columnCount(self, parent=None):
        return 2

    def rowCount(self, parent=None):
        return len(self.d)

    def data(self, index, role=1):
        if role == Qt.ItemDataRole.DisplayRole:
            return self.d[index.row()][index.column()]
        return None


class Window(QWidget):
    def __init__(self):
        super().__init__()
        view1 = QTreeView()
        view2 = QTableView()
        for v in (view1, view2):
            v.setSelectionMode(v.SelectionMode.ExtendedSelection)
        model = TModel()
        view1.setModel(model)
        view2.setModel(model)
        lay = QVBoxLayout(self)
        lay.addWidget(view1)
        lay.addWidget(view2)


app = QApplication([])
win = Window()
win.show()
app.exec()
</code></pre>
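<p>For completeness, the only direction I can think of myself is to fake the rubber band by hand in a <code>QTableView</code> subclass (rough, untested sketch below); I am hoping there is a built-in setting instead:</p>
<pre><code>from PyQt6.QtCore import QItemSelectionModel, QRect
from PyQt6.QtWidgets import QRubberBand, QTableView


class DragSelectTableView(QTableView):
    """QTableView where a drag started on empty space selects items."""

    def __init__(self, parent=None):
        super().__init__(parent)
        self._band = QRubberBand(QRubberBand.Shape.Rectangle, self.viewport())
        self._origin = None

    def mousePressEvent(self, event):
        pos = event.position().toPoint()
        if not self.indexAt(pos).isValid():
            # drag started outside the populated area: start our own rubber band
            self._origin = pos
            self._band.setGeometry(QRect(pos, pos))
            self._band.show()
        super().mousePressEvent(event)

    def mouseMoveEvent(self, event):
        if self._origin is not None:
            rect = QRect(self._origin, event.position().toPoint()).normalized()
            self._band.setGeometry(rect)
            self.setSelection(rect, QItemSelectionModel.SelectionFlag.ClearAndSelect)
            return
        super().mouseMoveEvent(event)

    def mouseReleaseEvent(self, event):
        if self._origin is not None:
            self._band.hide()
            self._origin = None
            return
        super().mouseReleaseEvent(event)
</code></pre>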
|
<python><pyqt><qtableview><pyqt6><qtreeview>
|
2025-09-12 18:17:42
| 0
| 1,016
|
mahkitah
|