| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,806,769
| 2,813,606
|
How to extract table from PDF with boxes into pandas dataframe
|
<p>I am trying to solve 3 problems:</p>
<ol>
<li>Detect a table in my PDF that appears after a specific section.</li>
<li>Parse out the information in the table into a pandas dataframe.</li>
<li>Mark if a box is checked (or not blank) next to the information parsed from the table.</li>
</ol>
<p>Here is a link to the <a href="https://docmadeeasy.com/file?sent=6741&h=R4JKdDqHcOL2dNL1#FmjNvJuCsJyxEXoMvWkjpy2gkorZfzDATFWwDtSARDh" rel="nofollow noreferrer">PDF</a></p>
<p>Here is my code so far. It has mostly been able to extract the table, but it can't quite identify whether a box is marked or not.</p>
<pre><code>import pandas as pd
import re
import fitz
from math import sqrt
from io import BytesIO
PDF_FILE_NAME = "path/test_doc.pdf"
SECTION_HEADER = "Section 3: Table"
# --- Helper Functions (Re-using the reliable text extraction) ---
def clean_item_text(text):
"""Removes leading symbols and cleans up whitespace."""
if pd.isna(text) or text == "":
return ""
# Pattern to find known symbols: ☑, ☐, □, ■, X, x, ✓, followed by optional space
cleaned = re.sub(r"[\u2611\u2610\u25A1\u25A0Xx\u2713]\s*", "", str(text).strip())
return cleaned.strip()
def extract_table_text(pdf_path, section_header):
"""
Extracts the table data, but cleans the item text to get only the name.
"""
with fitz.open(pdf_path) as doc:
text_pages = [page.get_text("text") for page in doc]
full_text = "".join(text_pages)
full_text = full_text.replace("Sec$on", "Section")
section_match = re.search(rf"{re.escape(section_header)}", full_text, re.IGNORECASE)
if not section_match:
raise ValueError(f"Section '{section_header}' not found.")
section_start = section_match.end()
text_after_section = full_text[section_start:].strip()
table_text = re.split(r"Section\s*\d+\s*:", text_after_section, maxsplit=1)[0]
lines = [l.strip() for l in table_text.split("\n") if l.strip()]
if len(lines) < 6:
raise ValueError("Insufficient lines found for table structure.")
headers = [l.strip('"').strip() for l in lines[2:5]]
items_raw = lines[5:]
# Define column splits based on the provided data structure
col1_raw, col2_raw, col3_raw = items_raw[0:3], items_raw[3:9], items_raw[9:15]
# Process raw lists to get cleaned text for the DF
col1 = [clean_item_text(x) for x in col1_raw]
col2 = [clean_item_text(x) for x in col2_raw]
col3 = [clean_item_text(x) for x in col3_raw]
maxlen = max(len(col1), len(col2), len(col3))
for c in (col1, col2, col3):
while len(c) < maxlen:
c.append("")
df = pd.DataFrame({
headers[0]: col1,
headers[1]: col2,
headers[2]: col3
})
# Return both the DataFrame and the list of headers
return df, headers
# --- OCR/Image Analysis Logic ---
def scan_checkbox_roi(pdf_path, df, all_headers):
"""
Scans an image region (ROI) to the left of each item name to detect a mark.
"""
mapping = {}
# Flatten all items in the DataFrame to a list of unique names (and filter blanks)
all_items = [item for col in all_headers for item in df[col].dropna().tolist() if item != ""]
all_items = list(set(all_items))
print("="*60)
print("IMAGE SCAN (OCR) ATTEMPT")
print("="*60)
with fitz.open(pdf_path) as doc:
for page_num, page in enumerate(doc, 1):
# Find coordinates of all relevant items on the page
words = page.get_text("words")
# Map item name to its bounding box (bbox)
item_coords = {}
for word in words:
text = clean_item_text(word[4])
if text in all_items and text not in item_coords:
item_coords[text] = word[:4] # (x0, y0, x1, y1)
# Process each found item
for item_text, item_bbox in item_coords.items():
# Define ROI: A small rectangle to the left of the item text.
# x0 = item_bbox[0] - 25, y0 = item_bbox[1] - 5
# x1 = item_bbox[0] - 5, y1 = item_bbox[3] + 5
roi_rect = fitz.Rect(item_bbox[0] - 25, item_bbox[1] - 5,
item_bbox[0] - 5, item_bbox[3] + 5)
if not roi_rect.is_empty:
# 1. Render the ROI to a Pixmap (Image) at high resolution
matrix = fitz.Matrix(3, 3)
pix = page.get_pixmap(matrix=matrix, clip=roi_rect)
# 2. Analyze Pixels for a Mark
dark_pixel_threshold = 0.9 # 90% white threshold
dark_pixel_count = 0
total_pixels = pix.width * pix.height
for i in range(0, len(pix.samples), pix.n):
r, g, b = pix.samples[i:i+3]
# Convert RGB to grayscale (luminance)
luminance = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255.0
if luminance < dark_pixel_threshold:
dark_pixel_count += 1
# 3. Determine Status
mark_ratio = dark_pixel_count / total_pixels
if mark_ratio > 0.05: # If more than 5% of pixels are dark (mark detected)
status = "checked"
else:
status = "unchecked"
mapping[item_text] = status
print(f" ✓ '{item_text}' (Ratio: {mark_ratio:.3f}) -> {status}")
else:
mapping[item_text] = ""
print(f" ✗ '{item_text}' - Invalid ROI")
return mapping
# --- Main Logic ---
def parse_pdf_for_table_with_checkboxes(pdf_file_path, section_header):
# 1. Extract the clean item names and original headers
df, original_data_cols = extract_table_text(pdf_file_path, section_header)
# 2. Use the item names to guide the image scanning for status
checkbox_map = scan_checkbox_roi(pdf_file_path, df, original_data_cols)
# 3. Apply status to dataframe (FIXED LOGIC)
# Ensure we only iterate over the original columns before adding new ones
for col in original_data_cols:
status_col = f"{col} Status"
def get_status(x):
if pd.isna(x) or x == "":
return ""
val = str(x).strip()
return checkbox_map.get(val, "")
df[status_col] = df[col].map(get_status)
# Re-order columns using the clean, original column list
new_cols = []
for h in original_data_cols:
new_cols.append(h)
new_cols.append(f"{h} Status")
return df[new_cols]
# Run
result = parse_pdf_for_table_with_checkboxes(PDF_FILE_NAME, SECTION_HEADER)
</code></pre>
<p>The final dataframe should look like this:</p>
<pre><code>Col1 Col1_Status Col2 Col2_Status Col3 Col3_Status
Item1 Checked Item4 Checked Item10 Checked
Item2 Item5 Item11
Item3 Item6 Item12
Item7 Checked Item13 Checked
Item8 Item14
Item9 Item15 Checked
</code></pre>
<p>But the columns are a little misaligned and none of the Xs in the boxes are being detected.</p>
<p>How do I solve this problem?</p>
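<p>For what it's worth, here is a minimal standalone sketch of the darkness check in isolation, rendering the clip region in grayscale so each pixel is a single byte (the zoom and threshold values are illustrative assumptions, not a verified fix for this document):</p>
<pre class="lang-py prettyprint-override"><code>import fitz  # PyMuPDF

def dark_ratio(page, rect, zoom=3, threshold=200):
    """Return the fraction of pixels darker than `threshold` inside `rect`."""
    pix = page.get_pixmap(matrix=fitz.Matrix(zoom, zoom), clip=rect,
                          colorspace=fitz.csGRAY, alpha=False)
    samples = pix.samples  # one byte per pixel in a grayscale pixmap
    dark = sum(1 for b in samples if b < threshold)
    return dark / max(len(samples), 1)

# Comparing the ratio for an ROI known to contain an X against an empty one
# helps pick a sensible cut-off before wiring it into the scan above.
</code></pre>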
|
<python><pandas><parsing><pdf><ocr>
|
2025-11-01 19:08:18
| 1
| 921
|
user2813606
|
79,806,767
| 3,577,105
|
Python: Getting the list of arguments passed into the current function, while allowing keyword argument default values
|
<p>I'm writing a function whose inner logic needs to know which optional keyword arguments the function was called with.</p>
<p>I also need to be able to specify default values for keyword arguments.</p>
<p>If an argument was specified in the call, it will behave one way; if that argument was not specified in the call but was instead taken from the default, it will use the default and will behave a different way.</p>
<p>I do have a working solution, but am wondering if there is a cleaner, safer, more concise way to accomplish the goal:</p>
<pre><code>import json
def addFeature(*args,**kwargs):
    defaults={
        'p1':1,
        'p2':2,
        'p3':3
    }
    mergedKwargs = defaults | kwargs # start with defaults; overwrite with any specified kwargs items
    print(f'args:{args}')
    print('keyword arguments specified in the call to this function:')
    print(json.dumps(kwargs,indent=3))
    print('mergedKwargs:')
    print(json.dumps(mergedKwargs,indent=3))
    # unpack to local variables
    (p1,p2,p3)=(mergedKwargs[k] for k in ('p1','p2','p3'))
    print(f'local values: p1={p1} p2={p2} p3={p3}')
addFeature(1,2,p1=5)
addFeature(3,4,p2=7,p3=9)
</code></pre>
<p>running this code:</p>
<pre><code>PS C:\Users\caver\Documents\GitHub> python argsTest.py
call to addFeature: args: (1, 2)
keyword arguments specified in the call to this function:
{
"p1": 5
}
mergedKwargs:
{
"p1": 5,
"p2": 2,
"p3": 3
}
local values: p1=5 p2=2 p3=3
call to addFeature: args: (3, 4)
keyword arguments specified in the call to this function:
{
"p2": 7,
"p3": 9
}
mergedKwargs:
{
"p1": 1,
"p2": 7,
"p3": 9
}
local values: p1=1 p2=7 p3=9
PS C:\Users\caver\Documents\GitHub>
</code></pre>
<p>One drawback of this method is that the function signature (list of possible arguments) is unknown when using *args and **kwargs.</p>
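<p>For comparison, here is a minimal sketch of the common sentinel-default pattern, which keeps an explicit signature while still distinguishing "passed by the caller" from "fell back to the default" (an alternative sketch, not the approach used above):</p>
<pre class="lang-py prettyprint-override"><code>_UNSET = object()  # sentinel: no caller can accidentally pass this exact object

def addFeature(*args, p1=_UNSET, p2=_UNSET, p3=_UNSET):
    defaults = {'p1': 1, 'p2': 2, 'p3': 3}
    # keyword arguments the caller actually supplied
    passed = {k: v for k, v in {'p1': p1, 'p2': p2, 'p3': p3}.items() if v is not _UNSET}
    merged = defaults | passed
    print(f'args: {args}  passed: {passed}  merged: {merged}')

addFeature(1, 2, p1=5)        # passed: {'p1': 5}   merged: {'p1': 5, 'p2': 2, 'p3': 3}
addFeature(3, 4, p2=7, p3=9)  # passed: {'p2': 7, 'p3': 9}
</code></pre>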
|
<python><arguments><default>
|
2025-11-01 19:02:54
| 0
| 904
|
Tom Grundy
|
79,806,536
| 4,108,542
|
"RuntimeError: PdhAddEnglishCounterW failed" in python psutil module
|
<p>I am trying to get the free memory and swap amounts on a Windows 7 64-bit machine with the psutil Python module. Virtual memory info works normally, but getting swap information fails:</p>
<pre><code>Python 3.8.6 (tags/v3.8.6:db45529, Sep 23 2020, 15:52:53) [MSC v.1927 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license()" for more information.
>>> import psutil
>>> psutil.virtual_memory()
svmem(total=8379674624, available=3433517056, percent=59.0, used=4946157568, free=3433517056)
>>> psutil.swap_memory()
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
psutil.swap_memory()
File "C:\Users\Artem\AppData\Local\Programs\Python\Python38\lib\site-packages\psutil\__init__.py", line 2051, in swap_memory
return _psplatform.swap_memory()
File "C:\Users\Artem\AppData\Local\Programs\Python\Python38\lib\site-packages\psutil\_pswindows.py", line 243, in swap_memory
percentswap = cext.swap_percent()
RuntimeError: PdhAddEnglishCounterW failed. Performance counters may be disabled.
>>>
</code></pre>
<p>Module version:</p>
<pre><code>C:\Users\Artem\Documents>pip list
Package Version
---------- -------
pip 25.0.1
psutil 7.1.2
setuptools 49.2.1
</code></pre>
|
<python><python-3.x><windows><windows-7><psutil>
|
2025-11-01 13:03:34
| 1
| 569
|
Artem
|
79,806,527
| 9,072,753
|
how to check if pid has rolled over?
|
<p>I am writing a program that will periodically scan all processes on a machine. I want to cache as much as possible to avoid any system load.</p>
<p>I can list all available pids on a machine with <code>ls /proc/[0-9]*</code> and then iterate over the list. What do I use as the key of the cache to protect against pid number re-use?</p>
<p>In other words, consider the following python code, that opens two pid directories in some different points in time. What is needed to compare the directories to check if they refer to the same process?</p>
<pre><code>pid_dir_fd1 = os.open("/proc/123", os.O_RDONLY)
time.sleep(1)
pid_dir_fd2 = os.open("/proc/123", os.O_RDONLY)
print(are_pid_dirs_equal(pid_dir_fd1, pid_dir_fd2))
</code></pre>
<p>I can't use <code>pidfd</code>, as I want to access the <code>/proc/<pid>/{cmdline,exe,stat,status}</code> files to collect information about processes. Or can I? I do not know how it is possible to connect a <code>pidfd</code> with the <code>/proc/<pid>/</code> files.</p>
<p>My current idea is to compare the starttime field of the process's stat file:</p>
<pre><code>import os
import time

def read_text(file: str, dir_fd: int):
    fd = os.open(file, os.O_RDONLY, dir_fd=dir_fd)
    with os.fdopen(fd) as f:
        return f.read()

def stat_field(statstr: str, idx: int):
    return int(statstr.split(")")[-1].split()[idx - 3])

pid_dir_fd1 = os.open("/proc/123", os.O_RDONLY)
starttime1 = stat_field(read_text("stat", pid_dir_fd1), 22)
time.sleep(1)
pid_dir_fd2 = os.open("/proc/123", os.O_RDONLY)
starttime2 = stat_field(read_text("stat", pid_dir_fd2), 22)
if starttime1 == starttime2:
    print("both directories are equal")
else:
    print("pid_dir_fd1 died in the meantime")
</code></pre>
<p>Is there something better?</p>
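<p>For illustration, a minimal sketch of using the <code>(pid, starttime)</code> pair as the cache key, assuming the standard Linux <code>/proc/<pid>/stat</code> layout where starttime is field 22:</p>
<pre class="lang-py prettyprint-override"><code>import os

def pid_key(pid: int) -> tuple[int, int]:
    """Key that changes whenever the pid number is reused by a new process."""
    with open(f"/proc/{pid}/stat", "rb") as f:
        data = f.read()
    # take everything after the ")" that closes the comm field; field numbering starts at 3 there
    fields = data.rsplit(b")", 1)[1].split()
    starttime = int(fields[22 - 3])
    return pid, starttime

# cache = {}; cache[pid_key(some_pid)] = ...   # a recycled pid gets a different key
</code></pre>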
|
<python><linux><pid>
|
2025-11-01 12:33:57
| 2
| 145,478
|
KamilCuk
|
79,806,459
| 4,565,376
|
How to get sum of tuples using built-in function?
|
<p>In Python 3, the built-in function <code>sum(iterable, /, start=0)</code> returns the total of <code>iterable</code>.
The <code>start</code> value is not allowed to be a string.</p>
<p>Ok, so we can call <code>sum</code> for lists (concatenation)</p>
<pre class="lang-py prettyprint-override"><code>sum([[1,2,3],[4,5],[6]], start=[])
</code></pre>
<blockquote>
<p><code>[1, 2, 3, 4, 5, 6]</code></p>
</blockquote>
<p>because</p>
<pre class="lang-py prettyprint-override"><code>[1,2,3] + [4,5] + [6] == [1, 2, 3, 4, 5, 6]
</code></pre>
<p>By the same logic</p>
<pre class="lang-py prettyprint-override"><code>(1,2,3) + (4,5) + (6,) == (1, 2, 3, 4, 5, 6)
</code></pre>
<p>I expect to be able to call <code>sum</code> for tuples with the corresponding result. But</p>
<pre class="lang-py prettyprint-override"><code>sum([(1,2,3),(4,5),(6)], start=())
</code></pre>
<blockquote>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "<pyshell#30>", line 1, in <module>
sum([(1,2,3),(4,5),(6)], start=())
TypeError: can only concatenate tuple (not "int") to tuple
</code></pre>
</blockquote>
<p>I have read the documentation and did not find any restrictions on the usage of tuples.</p>
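<p>For context, a minimal interpreter check: in the list passed to <code>sum</code> above, the parenthesized <code>(6)</code> is an <code>int</code>, not a one-element tuple, which is what the <code>TypeError</code> message points at:</p>
<pre class="lang-py prettyprint-override"><code>>>> type((6)), type((6,))
(<class 'int'>, <class 'tuple'>)
>>> sum([(1, 2, 3), (4, 5), (6,)], start=())
(1, 2, 3, 4, 5, 6)
</code></pre>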
|
<python><python-3.x><list><sum><tuples>
|
2025-11-01 10:10:06
| 1
| 337
|
leofun01
|
79,806,406
| 13,392,257
|
How to handle aiokafka getmany() exceptions in case of stopped kafka
|
<p>I am running an aiokafka consumer in a FastAPI app.
In the main file I start the consumer:</p>
<pre><code>import asyncio

from aiokafka import AIOKafkaConsumer
from fastapi import FastAPI

def create_application() -> FastAPI:
    ...
    return FastAPI()

app = create_application()
consumer = create_consumer()

@app.on_event("startup")
async def startup_event():
    """Start up event for FastAPI application."""
    global_items["logger"].info("Starting up...")
    await consumer.start()
    asyncio.gather(
        consume(),
        # some other tasks
        return_exceptions=True
    )

async def consume(db: Session = next(get_db())):
    """Consume and process messages from Kafka."""
    while True:
        try:
            print("Try consume")
            data = await consumer.getmany(timeout_ms=10000)
            for tp, msgs in data.items():
                if msgs:
                    for msg in msgs:
                        await process_message(msg, db)
                    await consumer.commit({tp: msgs[-1].offset+1})
        except Exception as e:
            print(f"ERROR: {e}")  # printing ERROR LOG
        finally:
            await asyncio.sleep(settings.common.consumer_pause_sec)
</code></pre>
<p>I want to print an ERROR log if Kafka is stopped.</p>
<p>My actions:</p>
<ol>
<li>Turn on Kafka and the FastAPI application - see valid logs</li>
<li>Turn off Kafka (via <code>docker stop</code>)</li>
</ol>
<p>Current result after these actions: I see only the "Try consume" logs.
Expected result: I want to see the error log (in the exception handler).</p>
|
<python><apache-kafka><aiokafka>
|
2025-11-01 08:14:29
| 1
| 1,708
|
mascai
|
79,806,248
| 6,822,178
|
Advanced combinatorics in Python: binom(n,2) subsets of binom(n,3) combinations without repetition
|
<p>I have a list <code>d_n</code> of <code>n</code> integers and the results <code>Z_klm</code> of a function <code>fun(dk, dl, dm)</code> for all <code>binom(n, 3)</code> combinations without repetition <code>(k, l, m)</code> out of <code>d_n</code> indices.</p>
<p>Now, for all <code>binom(n, 2)</code> combinations without repetitions <code>(s, t)</code> of <code>d_n</code> indices, I need to take the <code>T_st</code> partial sums of <code>Z_klm</code> where <code>(s, t)</code> is a subset of <code>(k, l, m)</code>.</p>
<p>Here's an example with a small <code>n</code>.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from itertools import combinations
from scipy.special import binom
# generate random data
n = 5
d_n = np.random.randint(-30, 30, 5)
# define the function
def fun(dk, dl, dm):
    return np.sign(dk - 2*dl + dm)

# calculate Z_klm and store (k,l,m) indices
klm = []
Z_klm = []
for k, l, m in combinations(range(n), 3):
    Z_klm.append(fun(d_n[k], d_n[l], d_n[m]))
    klm.append([k, l, m])

# calculate the partial sums T_st
st = {}
T_st = np.zeros(shape=int(binom(n, 2)))
for h, (s, t) in enumerate(combinations(range(n), 2)):
    st.update({f"({s},{t})": []})
    for i, _klm_ in enumerate(klm):
        if s in _klm_ and t in _klm_:
            T_st[h] += Z_klm[i]
            st[f"({s},{t})"].append(_klm_)
T_st, st
</code></pre>
<pre><code>(array([ 1., 1., 2., 2., 1., 1., 1., 1., 1., -2.]),
{'(0,1)': [[0, 1, 2], [0, 1, 3], [0, 1, 4]],
'(0,2)': [[0, 1, 2], [0, 2, 3], [0, 2, 4]],
'(0,3)': [[0, 1, 3], [0, 2, 3], [0, 3, 4]],
'(0,4)': [[0, 1, 4], [0, 2, 4], [0, 3, 4]],
'(1,2)': [[0, 1, 2], [1, 2, 3], [1, 2, 4]],
'(1,3)': [[0, 1, 3], [1, 2, 3], [1, 3, 4]],
'(1,4)': [[0, 1, 4], [1, 2, 4], [1, 3, 4]],
'(2,3)': [[0, 2, 3], [1, 2, 3], [2, 3, 4]],
'(2,4)': [[0, 2, 4], [1, 2, 4], [2, 3, 4]],
'(3,4)': [[0, 3, 4], [1, 3, 4], [2, 3, 4]]})
</code></pre>
<p>For example <code>T_{2,4}</code> is the sum of <code>Z_{0,2,4}</code>, <code>Z_{1,2,4}</code>, and <code>Z_{2,3,4}</code>.</p>
<p>My <em>rough</em> implementation works only because there are very few observations here. But with realistic sample sizes (usually up to <code>n = 1000</code>), it would take a lifetime to iterate over all the <code>binom(n,2)</code> pairs and <code>binom(n,3)</code> triples.</p>
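<p>To make the scale concrete, a quick check with the standard library:</p>
<pre class="lang-py prettyprint-override"><code>from math import comb

comb(1000, 2)  # 499500 pairs (s, t)
comb(1000, 3)  # 166167000 triples (k, l, m) -- the inner loop above visits all of them for every pair
</code></pre>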
<p>Any suggestions to speed it up with an efficient algorithm or more advanced iteration tools?</p>
<p>P.S.: I do not need to store all (k, l, m) and (s, t) indices; I only did it here to demonstrate how it works and to implement the algorithm for calculating the <code>T_st</code> partial sums.</p>
|
<python><combinations><python-itertools><combinatorics>
|
2025-10-31 22:29:08
| 1
| 2,289
|
Max Pierini
|
79,806,145
| 17,472,988
|
Python gets stuck in an infinite loop restarting multiprocessing pool workers on error in initialization routine
|
<p>I am trying to set up a multiprocessing Python task on Windows 10 with Python 3.13. I have a "main.py" module containing the main entry point and an "orchestration.py" module with the worker initialization and task routines, as well as some other modules not relevant to the present issue. An MRE is shown below.</p>
<p>If I have an exception within <code>main_worker()</code> (active in MRE), it is properly propagated to <code>main.py</code> try-except and can be handled in the main process.</p>
<p>However, if I have an exception in <code>worker_init()</code> (by uncommenting the <code>raise</code> there and commenting out the <code>raise</code> in <code>main_worker()</code>), Python attempts to restart the process without exception propagation. In case of a persistent error, Python gets stuck in an infinite loop of process restarting.</p>
<p>How can I properly terminate the whole thing in such a case?</p>
<p><strong>main.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import logging
import multiprocessing as mp
from multiprocessing import Pool
if __package__ is None or __package__ == "":
    sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
    from orchestration import worker_init, main_worker
else:
    from .orchestration import worker_init, main_worker

# ---------------------------------------------------------------------------
# Main driver
# ---------------------------------------------------------------------------
def main():
    logging.getLogger().warning(f"Running MainProcess PID-{os.getpid()}")
    try:
        with Pool(processes=2, initializer=worker_init) as pool:
            for i, result in enumerate(pool.imap_unordered(main_worker, range(10), chunksize=10)):
                if result:
                    data, err = result
                    logging.getLogger().warning(f"Worker job completed. Returned data: {data}.")
                else:
                    logging.getLogger().warning(f"Worker job did not return any result.")
                    err = mp.ProcessError(f"DUMMY error {err}")
                if err:
                    logging.getLogger().error(f"Worker job returned an error: {err}.")
                    pool.terminate()
                    break
    except Exception as e:
        logging.getLogger().error(f"Pool error: {e}.")

if __name__ == "__main__":
    main()
</code></pre>
<p><strong>orchestration.py</strong></p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
import logging
import multiprocessing as mp
def worker_init(worker_config=None):
    """Initializer for multiprocessing.Pool workers (per process)."""
    logging.getLogger().warning(f"Running worker_init PID-{os.getpid()}")
    # raise mp.ProcessError(f"DUMMY PID-{os.getpid()}")
    return

def main_worker(task_data=None):
    """Execute one job and return (path, meta, error)."""
    raise mp.ProcessError(f"DUMMY PID-{os.getpid()}")
    try:
        logging.getLogger().warning(f"Running main_worker PID-{os.getpid()}")
        data = os.getpid()
    except Exception as e:
        return None, e
    return data, None
</code></pre>
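<p>For reference, one commonly suggested mitigation is to keep the initializer from raising at all, record the failure, and surface it from the task function, where the existing error handling in <code>main.py</code> already picks it up. A minimal sketch (my own assumption, not part of the original MRE):</p>
<pre class="lang-py prettyprint-override"><code>import multiprocessing as mp
import os

_INIT_ERROR = None

def worker_init(worker_config=None):
    global _INIT_ERROR
    try:
        raise mp.ProcessError(f"DUMMY PID-{os.getpid()}")  # simulate a failing init
    except Exception as e:
        _INIT_ERROR = e  # remember the failure instead of letting the pool restart the worker

def main_worker(task_data=None):
    if _INIT_ERROR is not None:
        return None, _INIT_ERROR  # reaches the parent through imap_unordered like any other result
    return os.getpid(), None
</code></pre>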
|
<python><windows><multithreading><multiprocessing>
|
2025-10-31 19:34:02
| 1
| 1,859
|
PChemGuy
|
79,805,870
| 2,123,706
|
Python was not found error when running python script from cmd prompt
|
<p>I can run a python script line by line successfully in VSCode</p>
<p><a href="https://i.sstatic.net/BHcx8a9z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHcx8a9z.png" alt="enter image description here" /></a></p>
<p>But when I try to run it from prompt using <code>python .\test.py</code> I receive the error:</p>
<blockquote>
<p>Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Apps > Advanced app settings > App execution aliases.</p>
</blockquote>
<p>I have tried the following:</p>
<ul>
<li>uninstalling and reinstalling python</li>
<li>updating the PATH variable in both local and system settings</li>
<li>updating the Python interpreter to the newly installed Python</li>
<li>installing Python from the Microsoft Store</li>
</ul>
<p>what else can I try to make this work again?</p>
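<p>For diagnosis, a minimal sketch that shows which executable (if any) the name <code>python</code> currently resolves to on PATH; it has to be run from an environment where an interpreter does start, e.g. the VS Code terminal:</p>
<pre class="lang-py prettyprint-override"><code>import shutil

# The Microsoft Store execution alias lives under ...\Microsoft\WindowsApps and
# shadows a real installation when it comes first on PATH.
print(shutil.which("python"))
print(shutil.which("py"))  # the Windows launcher, if installed
</code></pre>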
|
<python><windows>
|
2025-10-31 14:09:59
| 1
| 3,810
|
frank
|
79,805,820
| 7,699,611
|
testcontainers initializing dockerclient twice - the second time on container.start
|
<p>I am trying to create a container with an Oracle DB to run tests. Due to some restrictions, I have to use rootless Podman instead of Docker. Here is how I do it:</p>
<pre><code>import os
import socket
from typing import Generator

import pytest
from testcontainers.core.container import DockerContainer

def _container_env_kwargs() -> dict:
    # Try Podman rootless unix socket
    env = os.environ.copy()  # assumed initialization; not shown in the original snippet
    rootless = f"/var/run/user/{os.getuid()}/podman/podman.sock"
    if os.path.exists(rootless):
        try:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.settimeout(1.0)
            s.connect(rootless); s.close()
            env["DOCKER_HOST"] = f"unix://{rootless}"
            env.setdefault("DOCKER_API_VERSION", "1.41")
            env.setdefault("TESTCONTAINERS_RYUK_DISABLED", "true")
            return {"environment": env}
        except Exception:
            raise RuntimeError("Failed to create env for container client")

@pytest.fixture(scope="session")
def oracle_container() -> Generator[dict, None, None]:
    """
    Start an Oracle XE container using Podman.
    Wait until it is ready to accept SQL connections.
    """
    dk_kwargs = _container_env_kwargs()
    container = (
        DockerContainer(ORACLE_IMAGE, docker_client_kw=dk_kwargs)
        .with_env("ORACLE_PASSWORD", ORACLE_PASSWORD)
        .with_env("ORACLE_DATABASE", ORACLE_SERVICE)
        .with_exposed_ports("1521/tcp")
    )
    container.start()
</code></pre>
<p>When I try to run the tests, I get this stack trace:</p>
<pre><code>The above exception was the direct cause of the following exception:
@pytest.fixture(scope="session")
def oracle_container() -> Generator[dict, None, None]:
"""
Start an Oracle XE container using Docker or Podman.
Wait until it is ready to accept SQL connections.
"""
dk_kwargs = _container_env_kwargs() # <- returns {"environment": {...}}
container = (
DockerContainer(ORACLE_IMAGE, docker_client_kw=dk_kwargs)
.with_env("ORACLE_PASSWORD", ORACLE_PASSWORD)
.with_env("ORACLE_DATABASE", ORACLE_SERVICE)
.with_exposed_ports("1521/tcp")
)
> container.start()
conftest.py:125:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/testcontainers/core/container.py:176: in start
Reaper.get_instance()
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/testcontainers/core/container.py:320: in get_instance
Reaper._instance = Reaper._create_instance()
^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/testcontainers/core/container.py:343: in _create_instance
DockerContainer(c.ryuk_image)
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/testcontainers/core/container.py:85: in __init__
self._docker = DockerClient(**(docker_client_kw or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/testcontainers/core/docker_client.py:73: in __init__
self.client = docker.from_env(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/docker/client.py:94: in from_env
return cls(
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/docker/client.py:45: in __init__
self.api = APIClient(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
../../miniconda3/envs/scoring-service-v-2/lib/python3.12/site-packages/docker/api/client.py:207: in __init__
self._version = self._retrieve_server_version()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <docker.api.client.APIClient object at 0x7f6a5593cc20>
def _retrieve_server_version(self):
try:
return self.version(api_version=False)["ApiVersion"]
except KeyError as ke:
raise DockerException(
'Invalid response from docker daemon: key "ApiVersion"'
' is missing.'
) from ke
except Exception as e:
> raise DockerException(
f'Error while fetching server API version: {e}'
) from e
E docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
</code></pre>
<p>When I debug and go step by step, I see that the <code>DockerClient</code> object is created twice. The first time is when I create the <code>DockerContainer</code> object, and <code>self._version = self._retrieve_server_version()</code> runs without errors when the APIClient object is created that first time.</p>
<p>But then, when I hit <code>container.start()</code>, my breakpoints trigger again as if a new <code>DockerClient</code> object is being created, but without the env that I provide, and I get the exception.</p>
<p>Why does it happen?</p>
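<p>The traceback shows the second client being built by the Ryuk reaper (<code>Reaper._create_instance()</code> constructs its own <code>DockerContainer</code> without the <code>docker_client_kw</code> you passed). A minimal sketch of one workaround (my assumption, not from the original post) is to export the settings at the process level so every client testcontainers creates sees them:</p>
<pre class="lang-py prettyprint-override"><code>import os

# Set before any testcontainers object is created, e.g. at the top of conftest.py
os.environ.setdefault("DOCKER_HOST", f"unix:///var/run/user/{os.getuid()}/podman/podman.sock")
os.environ.setdefault("TESTCONTAINERS_RYUK_DISABLED", "true")
</code></pre>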
|
<python><python-3.x><docker><testcontainers><podman>
|
2025-10-31 13:02:02
| 1
| 939
|
Muslimbek Abduganiev
|
79,805,794
| 4,316,500
|
pyueye is_GetCameraList gives wrong dwCameraID
|
<p>When I open "IDS Camera Manager", I can read all serial numbers, camera type and CameraID of the connected IDS cameras.</p>
<p>I would expect the same functionality from the pyueye.ueye.is_GetCameraList function.</p>
<p>But when I execute it, I get the right number of cameras (I currently have 2 connected cameras), while all the other parameters come back empty or zero:</p>
<pre><code>struct UEYE_CAMERA_INFO {
dwCameraID [c_uint] = 2;
dwDeviceID [c_uint] = 0;
dwSensorID [c_uint] = 0;
dwInUse [c_uint] = 0;
SerNo [c_char_Array_16] = b'';
Model [c_char_Array_16] = b'';
dwStatus [c_uint] = 0;
dwReserved [c_uint_Array_2] = <pyueye.ueye.c_uint_Array_2 object at 0x0000029BE79A9ED0>;
FullModelName [c_char_Array_32] = b'';
dwReserved2 [c_uint_Array_5] = <pyueye.ueye.c_uint_Array_5 object at 0x0000029BE79A9ED0>;
};
struct UEYE_CAMERA_INFO {
dwCameraID [c_uint] = 0;
dwDeviceID [c_uint] = 0;
dwSensorID [c_uint] = 0;
dwInUse [c_uint] = 0;
SerNo [c_char_Array_16] = b'';
Model [c_char_Array_16] = b'';
dwStatus [c_uint] = 0;
dwReserved [c_uint_Array_2] = <pyueye.ueye.c_uint_Array_2 object at 0x0000029BE79A9ED0>;
FullModelName [c_char_Array_32] = b'';
dwReserved2 [c_uint_Array_5] = <pyueye.ueye.c_uint_Array_5 object at 0x0000029BE79A9ED0>;
};
</code></pre>
<p>I guess I should complain directly to IDS... but perhaps someone has a solution for me?</p>
|
<python><python-module>
|
2025-10-31 12:42:38
| 0
| 1,011
|
2diabolos.com
|
79,805,594
| 9,112,151
|
How to resolve error Unresolved attribute reference 'create' for class 'BaseRepository'
|
<p>I have code:</p>
<pre><code>import logging
from typing import Generic, TypeVar
from typing import Self, Any, Type
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.orm import Session
logger = logging.getLogger(__name__)
T = TypeVar('T')
class BaseRepository(Generic[T]):
model_cls: Type[T]
def __init__(self, session: Session | AsyncSession) -> None:
if not self.model_cls:
raise ValueError(f"Не задана модель в атрибуте `{self.__class__.__name__}.model_cls`")
self._session = session
self._flush = False
self._commit = False
def _clone(self) -> Self:
clone = self.__class__(self._session)
clone._flush = self._flush
clone._commit = self._commit
return clone
def flush(self) -> Self:
clone = self._clone()
clone._flush = True
return clone
def commit(self) -> Self:
clone = self._clone()
clone._commit = True
return clone
class AsyncRepository(BaseRepository):
async def create(self, **kw: Any) -> T:
obj = self.model_cls(**kw)
self._session.add(obj)
await self._flush_commit(obj)
return obj
async def _flush_commit(self, *objs: T) -> None:
if self._commit and objs:
await self._session.commit()
elif self._flush:
await self._session.flush(objs)
self._flush = False
self._commit = False
class SyncRepository(BaseRepository):
def create(self, **kw: Any) -> T:
obj = self.model_cls(**kw)
self._session.add(obj)
self._flush_commit(obj)
return obj
def _flush_commit(self, *objs: T) -> None:
if self._commit and objs:
self._session.commit()
elif self._flush:
self._session.flush(objs)
self._flush = False
self._commit = False
</code></pre>
<p>When doing like this:</p>
<pre><code>from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import sessionmaker, declarative_base
Base = declarative_base()
class User(Base):
__tablename__ = 'users'
id = Column(Integer, primary_key=True)
name = Column(String)
age = Column(Integer)
engine = create_engine('sqlite:///db.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
class UsersRepo(SyncRepository):
model_cls = User
with Session() as session:
repo = UsersRepo(session)
repo.commit().create(name="asd", age=10)
</code></pre>
<p>At line <code>repo.commit().create(name="asd", age=10)</code> PyCharm gives me a warning:</p>
<p><a href="https://i.sstatic.net/bZGqi1KU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZGqi1KU.png" alt="enter image description here" /></a></p>
<p>How to fix this warning?</p>
|
<python><pycharm><python-typing>
|
2025-10-31 09:34:45
| 0
| 1,019
|
Альберт Александров
|
79,805,579
| 1,719,931
|
Extending polars DataFrame while maintaining variables between calls
|
<p>I would like to code a logger for polars using the <a href="https://docs.pola.rs/api/python/stable/reference/api.html" rel="nofollow noreferrer">Custom Namespace API</a>.</p>
<p>For instance, starting from:</p>
<pre class="lang-py prettyprint-override"><code>import logging
import polars as pl
penguins_pl = pl.read_csv("https://raw.githubusercontent.com/mwaskom/seaborn-data/master/penguins.csv")
</code></pre>
<p>my objective is to have</p>
<pre class="lang-py prettyprint-override"><code>penguins_pl.mine.startlog().filter(pl.col("species")=="Adelie").mine.endlog()
</code></pre>
<p>log "192 rows has been removed".</p>
<p>The plan is to have <code>startlog</code> save the shape of the dataframe in a temporary variable and then reuse that in <code>endlog</code>.</p>
<p>I have tried this:</p>
<pre class="lang-py prettyprint-override"><code>@pl.api.register_dataframe_namespace("mine")
class MinePolarsDataframeUtils:
    def __init__(self, df: pl.DataFrame):
        self._df = df

    def startlog(self):
        self._shape = self._df.shape
        return(self._df)

    def endlog(self):
        if not self._shape:
            raise ValueError("startlog() must be called before endlog()")
        dr = self._shape[0] - self._df.shape[0]
        dc = self._shape[1] - self._df.shape[1]
        logging.getLogger("polars_logger").info(f"Rows added {dr}, cols added {dc}")
        self._shape = None
        return(self._df)
</code></pre>
<p>But it doesn't work, because a new <code>MinePolarsDataframeUtils</code> instance is created both when <code>startlog</code> and when <code>endlog</code> are called.</p>
<p>That is, when <code>endlog</code> is called the namespace object starts from scratch, so the <code>self._shape</code> value saved by <code>startlog</code> is not carried over and is undefined.</p>
<p>How can I keep custom variables between calls when extending polars?</p>
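<p>For illustration, a minimal sketch of one workaround: keep the state outside the per-access namespace instance, in module-level storage (the <code>"mine2"</code> name and the <code>key</code> label are hypothetical additions for the sketch):</p>
<pre class="lang-py prettyprint-override"><code>import logging
import polars as pl

_LOG_STATE: dict[str, tuple[int, int]] = {}  # survives re-instantiation of the namespace

@pl.api.register_dataframe_namespace("mine2")
class MineStatefulUtils:
    def __init__(self, df: pl.DataFrame):
        self._df = df

    def startlog(self, key: str = "default") -> pl.DataFrame:
        _LOG_STATE[key] = self._df.shape  # stored at module level, not on self
        return self._df

    def endlog(self, key: str = "default") -> pl.DataFrame:
        start = _LOG_STATE.pop(key, None)
        if start is None:
            raise ValueError("startlog() must be called before endlog()")
        dr = start[0] - self._df.shape[0]
        logging.getLogger("polars_logger").info(f"Rows removed: {dr}")
        return self._df
</code></pre>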
<p>Related: <a href="https://stackoverflow.com/questions/79536363/logging-operation-results-in-pandas-equivalent-of-stata-tidylog">Logging operation results in pandas (equivalent of STATA/tidylog)</a></p>
<p>Related: <a href="https://stackoverflow.com/a/71729343/1719931">https://stackoverflow.com/a/71729343/1719931</a></p>
|
<python><python-polars>
|
2025-10-31 09:19:13
| 2
| 5,202
|
robertspierre
|
79,805,422
| 6,048,158
|
Why are there no records read from spanner change stream?
|
<p>I'm trying to write a Python GCP dataflow that processes records from a Spanner change stream and prints them out. I am running it locally and it appears to work but prints no records when I update a record in the database.</p>
<p>My setup:</p>
<ul>
<li>created a Spanner database, a table called <strong>Patients</strong>, and a change stream called <strong>PatientsChangeStream</strong>. The change stream looks like this:</li>
</ul>
<pre><code>CREATE CHANGE STREAM PatientsChangeStream
FOR Patients
OPTIONS (
value_capture_type = 'NEW_ROW',
retention_period = '7d',
exclude_insert = false,
exclude_update = false,
exclude_delete = false
);
</code></pre>
<p>Here is the Python code. It seems to connect to the database and bind to the change stream, but it won't print any records after I insert or update records in the <strong>Patients</strong> table.</p>
<pre><code>import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, GoogleCloudOptions, StandardOptions
from apache_beam.io.gcp.spanner import ReadChangeStreamFromSpanner
from datetime import datetime, timedelta, timezone
from google.oauth2 import service_account
class PrintChangeRecord(beam.DoFn):
def __init__(self):
self.count = 0
def process(self, element):
self.count += 1
print(f"Change Record #{self.count}: {element}")
yield element
def run_pipeline(project_id, instance_id, database_id, change_stream_name, metadata_database_id, credentials_path):
# Configure pipeline options with Google Cloud settings
options = PipelineOptions()
google_cloud_options = options.view_as(GoogleCloudOptions)
google_cloud_options.project = project_id
google_cloud_options.region = 'us-central1'
# Use service account key file
credentials = service_account.Credentials.from_service_account_file(
credentials_path,
scopes=['https://www.googleapis.com/auth/cloud-platform']
)
google_cloud_options.service_account_email = credentials.service_account_email
# Set environment variable for the pipeline to run locally
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
print(f"Using service account: {credentials.service_account_email}")
standard_options = options.view_as(StandardOptions)
standard_options.runner = 'DirectRunner' # Use 'DataflowRunner' for production
# subtract 24 hours from the start_time for inclusiveStartAt
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24) # Look back 24 hours
print(f"==> start time is {start_time.isoformat()}")
print(f"==> end time is {end_time.isoformat()}")
with beam.Pipeline(options=options) as pipeline:
print("Pipeline object created")
# Read from the Spanner change stream
change_stream_records = (pipeline | "ReadChangeStream" >> ReadChangeStreamFromSpanner(
project_id=project_id,
instance_id=instance_id,
database_id=database_id,
changeStreamName=change_stream_name,
metadataDatabase=metadata_database_id,
metadataInstance=instance_id,
inclusiveStartAt=start_time.isoformat(),
inclusiveEndAt=end_time.isoformat()
))
# Print each change record
change_stream_records | "PrintRecords" >> beam.ParDo(PrintChangeRecord())
print("Pipeline execution completed")
if __name__ == "__main__":
# Replace with your actual project, instance, database, and change stream details
PROJECT_ID = "xxxxxx"
INSTANCE_ID = "yyyyyy"
DATABASE_ID = "zzzzzz"
CHANGE_STREAM_NAME = "PatientsChangeStream"
METADATA_DATABASE_ID = "metadata" # A separate database for metadata
CREDENTIALS_PATH = "c:/xxxxx/yyyyyyy.json"
try:
print("Starting pipeline...")
run_pipeline(PROJECT_ID, INSTANCE_ID, DATABASE_ID, CHANGE_STREAM_NAME, METADATA_DATABASE_ID, CREDENTIALS_PATH)
print("Script completed")
except KeyboardInterrupt:
print("Script killed by user")
except Exception as ex:
print(f"An exception occurred: {str(ex)}")
</code></pre>
|
<python><apache-beam><google-cloud-spanner>
|
2025-10-31 05:21:17
| 0
| 525
|
Joe P
|
79,805,402
| 1,797,628
|
Python tempfile TemporaryDirectory path changes multiple times after initialization
|
<p>I am using <a href="https://docs.python.org/3/library/tempfile.html" rel="nofollow noreferrer">tempfile</a> with Polars for the first time and getting some surprising behavior when running it in a serverless Cloud Function-like environment. Here is my simple test code:</p>
<pre><code>try:
    with tempfile.TemporaryDirectory() as tempdir:
        logger.info(f'Writing output to temp directory: {tempdir}')
        my_dataframe.collect().write_csv(tempdir)
        logger.info(f'File was saved to: {tempdir}')
except Exception as e:
    raise IOError(f'Failed to write output CSV: {str(e)}')
</code></pre>
<p>This fails, but the logs show something strange:</p>
<blockquote>
<p>Writing output to temp directory: /tmp/tmp2q_iiq5z</p>
</blockquote>
<blockquote>
<p>I/O error: Failed to write output CSV: No such file or directory (os
error 2): /tmp/tmpu8j0xpcj</p>
</blockquote>
<p>It seems that somehow the tempdir path gets changed in between the log and the exception log (and possibly even during the <code>write_csv()</code> call)! What is going on here, and could this be the reason it cannot write the CSV file?</p>
|
<python><serverless><python-polars><temporary-files>
|
2025-10-31 04:42:19
| 1
| 2,585
|
starmandeluxe
|
79,805,124
| 6,457,407
|
SWIG code doesn't compile on Github/MacOS machine. Works fine on Linux, Windows
|
<p>My SWIG code has stopped compiling when being built on a GitHub macOS runner.</p>
<p>Since the SWIG code uses numpy, it has the required numpy initialization:</p>
<pre><code>%init %{
import_array();
}
</code></pre>
<p>Swig turns this into the following C code:</p>
<pre><code>SWIG_init(void) {
...
import_array();
...
}
</code></pre>
<p><code>import_array()</code> is a macro defined in the numpy header file <code>__multiarray_api.h</code>. It initializes numpy, but does <code>return NULL;</code> if the initialization fails.</p>
<p>The Windows and Linux gcc compilers seem to be okay with a function that is implicitly declared to return an <code>int</code> returning NULL. The MacOS compiler complains about converting a pointer to an integer.</p>
<p>Have others run into this problem? Is there an easy solution?</p>
|
<python><gcc><swig>
|
2025-10-30 18:55:05
| 0
| 11,605
|
Frank Yellin
|
79,805,035
| 12,281,892
|
PyScript on Github pages doesn't see files with relative paths
|
<p>I have a github repo that is hosted on GitHub Pages (project site). I’m migrating a small PyScript demo to GitHub Pages and hitting 404s when preloading files via <code><py-config></code>. This works locally and also worked previously with an older PyScript alpha*. The github repo contains an <code>index.html</code> and static files live in a <code>src/</code> folder. I use PyScript, a recent release, 2025.10.3.</p>
<p>For illustration, this is how my code looks:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>demo</title>
<!-- -->
<link rel="stylesheet" href="https://pyscript.net/releases/2025.10.3/core.css" />
<script type="module" src="https://pyscript.net/releases/2025.10.3/core.js"></script>
<py-config>
{
"packages": ["numpy"],
"files": {
"/src/module.py": "./module.py",
"/src/data.npz": "src/data.npz"
},
"paths": ["/src"]
}
</py-config>
</head>
<py-script>
import module # expected to work because it is on sys.path
import numpy as np
arr = np.load("/src/data.npz") #e.g.
print("OK")
</py-script>
</html>
</code></pre>
<p>Everything works when I run it locally; both the module and the data file get loaded. However, when I push it to GitHub Pages, the network panel shows 404s for those files. The browser tries URLs that don't exist for the deployed site (e.g., resolving from the domain root, from the wrong folder, or from hidden files -- the repo itself is not public). What is the correct way of dealing with these files? How can I serve or source them without absolute paths (I would like to avoid putting the files out and hardwiring the path), the way the older PyScript setup used to?</p>
<p>Thanks.</p>
<p>*That was implemented as:</p>
<pre class="lang-html prettyprint-override"><code> <py-env>
- numpy
- paths:
- src/module.py
- src/data.npz
</py-env>
</code></pre>
|
<python><file><github-pages><pyscript>
|
2025-10-30 16:59:09
| 1
| 2,550
|
My Work
|
79,804,988
| 14,773,854
|
Windows API: Software breakpoints are not hit
|
<p>I am using Python to write a rudimentary debugger that will use Windows APIs to debug C source files. I am stuck on creating the software breakpoints. I did some research and found <a href="https://www.codeproject.com/articles/Writing-Windows-Debugger-Part-2" rel="nofollow noreferrer">this article</a> that explains setting software breakpoints involves setting <code>0xCC</code> to a location.</p>
<pre class="lang-py prettyprint-override"><code>original = ctypes.c_ubyte()
bytes_read = SIZE_T()
handle = OpenProcess(PROCESS_ALL_ACCESS, False, self.process_id)
# Read original byte
ReadProcessMemory(handle, address, ctypes.byref(original), 1, ctypes.byref(bytes_read))
# Write INT3
int3 = ctypes.c_ubyte(0xCC)
bytes_written = SIZE_T()
WriteProcessMemory(handle, address, ctypes.byref(int3), 1, ctypes.byref(bytes_written))
self.breakpoints[address] = original.value
CloseHandle(handle)
</code></pre>
<p>Getting the address using comtypes that uses the MSDIA SDK to parse a PDB file:</p>
<pre class="lang-py prettyprint-override"><code>source_files = self.session.findFile(None, r"/path/to/source/file.c", 0x2 | 0x4)
source_file = source_files.Item(0)
line_numbers = self.session.findLinesByLinenum(source_file.compilands[0], source_file, line, 0)
line_number = line_numbers.Next(1)[0]
address = self.base_address + line_number.addressOffset # + line_number.virtualAddress
</code></pre>
<p>Logic to handle software breakpoints:</p>
<pre class="lang-py prettyprint-override"><code>def handle_software_breakpoint(self, thread_id, exception_address):
"""Handle a software breakpoint hit (INT3)"""
if exception_address not in self.breakpoints:
print("[!] Breakpoint hit at unknown address 0x%X" % exception_address)
return
# Fix instruction pointer to re-execute original instruction
context = CONTEXT32()
context.ContextFlags = CONTEXT_ALL
thread_handle = OpenThread(THREAD_ALL_ACCESS, False, thread_id)
GetThreadContext(thread_handle, ctypes.byref(context))
context.Eip -= 1 # Rewind past INT3
SetThreadContext(thread_handle, ctypes.byref(context))
CloseHandle(thread_handle)
original_byte = self.breakpoints[exception_address]
handle = OpenProcess(PROCESS_ALL_ACCESS, False, self.process_id)
# Restore original instruction byte
orig = ctypes.c_ubyte(original_byte)
size = SIZE_T()
WriteProcessMemory(handle, exception_address, ctypes.byref(orig), 1, ctypes.byref(size))
CloseHandle(handle)
print("[*] Software breakpoint handled at 0x%X" % exception_address)
</code></pre>
<p>However, the only breakpoint that is handled is at address <code>0x77BF1B52</code>. How can I ensure that the breakpoints are reached? Does one have to account for comments when reading memory addresses from a line in source code?<br />
Python/C/C++ solutions are acceptable.</p>
<p>Below is the debug loop:</p>
<pre class="lang-py prettyprint-override"><code>def run_debug_loop(self):
print("[+] Starting debug loop... waiting for events.")
debug_event = DEBUG_EVENT()
while WaitForDebugEvent(ctypes.byref(debug_event), INFINITE):
code = debug_event.dwDebugEventCode
pid = debug_event.dwProcessId
tid = debug_event.dwThreadId
if code == CREATE_PROCESS_DEBUG_EVENT:
print("[+] Process created, setting breakpoints...")
# Example: set a breakpoint at main()
self.set_software_breakpoint(self.io_addresses["B"])
ResumeThread(self.thread_handle)
elif code == EXCEPTION_DEBUG_EVENT:
record = debug_event.u.Exception.ExceptionRecord
exc_code = record.ExceptionCode
addr = record.ExceptionAddress
if exc_code == EXCEPTION_BREAKPOINT:
self.handle_software_breakpoint(tid, addr)
elif exc_code == EXCEPTION_SINGLE_STEP:
print("[!] Unexpected single-step (from restored breakpoint)")
elif code == 5: # EXIT_PROCESS_DEBUG_EVENT
print("[+] Process exited.")
break
ContinueDebugEvent(pid, tid, DBG_CONTINUE)
</code></pre>
|
<python><c><winapi>
|
2025-10-30 16:16:26
| 1
| 357
|
user14773854
|
79,804,948
| 5,931,672
|
Understanding right_bases of peak_prominences
|
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from scipy.signal import find_peaks, peak_prominences
# I have my array x
peaks, _ = find_peaks(x)
prominences, left_bases, right_bases = peak_prominences(x, peaks)
contour_heights = x[peaks] - prominences
plt.plot(x)
plt.plot(peaks, x[peaks], "x")
plt.vlines(x=peaks, ymin=contour_heights, ymax=x[peaks])
plt.hlines(contour_heights, xmin=left_bases, xmax=right_bases)
plt.show()
</code></pre>
<p>And the resulting image is this:
<a href="https://i.sstatic.net/iSuUCxj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iSuUCxj8.png" alt="enter image description here" /></a></p>
<p>Indeed, <code>left_bases = array([ 4, 10, 16, 22, 29, 36, 42])</code> whereas <code>right_bases = array([47, 47, 47, 29, 47, 47, 47])</code>.</p>
<p>This is not exactly what I was looking for; I was expecting, for example, <code>right_bases[0] = 10</code>. Why is this happening? How does the code determine the right and left bases?</p>
|
<python><scipy>
|
2025-10-30 15:45:19
| 1
| 4,192
|
J Agustin Barrachina
|
79,804,879
| 6,772,468
|
How to fix "AttributeError: 'Series' object has no attribute 'codes'" using pandas.Categorical
|
<p>I am trying to convert a string column that holds categorical data into a numeric one. I found out that I can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Categorical.html" rel="nofollow noreferrer">pandas.Categorical</a>;
unfortunately, accessing the codes attribute gives me an error.</p>
<p>Here is a minimal example of my working code</p>
<pre><code>>>> sessions_df = pd.read_csv("fitness_sessions_2025.csv")
>>> sessions_df.head()
user_name sex age experience_level
0 Alice F 29 Intermediate
1 Alice F 29 Intermediate
2 Alice F 29 Intermediate
>>> sessions_df["experience_level"].unique()
array(['Intermediate', 'Beginner', 'Advanced'], dtype=object)
>>> sessions_df["experience_level"] = pd.Categorical(
... sessions_df["experience_level"],
... categories=['Beginner', 'Intermediate', 'Advanced'],
... ordered=True)
>>> sessions_df["experience_level"].codes
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_24656\2056368924.py in <module>
----> 1 sessions_df["experience_level"].codes
~\AppData\Roaming\Python\Python39\site-packages\pandas\core\generic.py in __getattr__(self, name)
6202 ):
6203 return self[name]
-> 6204 return object.__getattribute__(self, name)
6205
6206 @final
AttributeError: 'Series' object has no attribute 'codes'
</code></pre>
<p>Can anyone please explain what I am doing wrong and advise the best approach?</p>
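<p>For reference, a minimal sketch of how the codes are reached once the column has been converted: assigning a <code>pd.Categorical</code> into the DataFrame gives back a <code>Series</code> of dtype <code>category</code>, and a Series exposes the codes through its <code>.cat</code> accessor rather than directly:</p>
<pre class="lang-py prettyprint-override"><code>>>> sessions_df["experience_level"].cat.codes.head(3).tolist()
[1, 1, 1]  # Beginner=0, Intermediate=1, Advanced=2
</code></pre>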
|
<python><pandas><categorical-data>
|
2025-10-30 14:32:42
| 1
| 1,375
|
JA-pythonista
|
79,804,833
| 4,498,251
|
Pandas: Why do certain errors appear?
|
<p>When executing a simple group by statement on a pandas dataframe, there is a weird deprecation warning (about details like whether or not to include the group-by columns in the resulting table): <a href="https://stackoverflow.com/questions/77969964/deprecation-warning-with-groupby-apply">Deprecation Warning with groupby.apply</a></p>
<p>Yes, I get that there is an article referred to in this question that shows why doing the naive, simple thing (yes, show the group columns) can lead to problems (so please do not explain why there is an issue on Pandas, I get that from the article linked in that question).</p>
<p>What I don't get is the following: When putting myself into the shoes of a regular poor user who just wants to execute a group by statement on a simple table (without any fancy extensions), then in TONS of other languages, I can simply do that by... well... by simply executing a GROUP BY statement. Pandas seems to be the only description language of data where this can lead to a problem.</p>
<p>Can somebody explain to me why there is a problem in Pandas while there is no problem in any of the other SQL like languages with GROUP BY?</p>
<p>It appears to me that this "issue" is not an issue of the user but rather, it's an issue of Pandas... Is that correct? If so, why don't they simply fix it and execute group by statements as any other SQL like language would do?</p>
|
<python><pandas><dataframe>
|
2025-10-30 13:47:28
| 0
| 1,023
|
Fabian Werner
|
79,804,795
| 5,547,553
|
How to count the length of a list of dicts, containing bytes as well in python
|
<p>I have composed this list of dicts:</p>
<pre class="lang-py prettyprint-override"><code>from pprint import pprint
image = b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\x16\x00\x00\x00\x16\x08\x02\x00\x00\x00K\xd6\xfbl\x00\x00\x00\tpHYs\x00\x00\x0e\xc4\x00\x00\x0e\xc4\x01\x95+\x0e\x1b\x00\x00\x02\xbfIDAT8\x8d\xa5TIkTA\x10\xfe\xaa_\xbf\x99q\x8cD\xc7\x89\xc4\x8c\xe0\x96\x83\x1aQAD\x02\x92\x1f\xe0E\x04\xbd(x\x12%\x1e\x04o\xe6\xe0E\x0fzR\x0fn\x88\x1e=\xa9\x10E\\0(\x92 \x88{b\x14\x8d!.\x89&1\x99,\xe3\x9bL\xcf\xebW\xe5!\x93\xcc\xaa \xf9N\xdd\xd5U_U\x7f\xd5]\xc4\xcc\x98\x1b\xd4\x1c\xe3\x01\xe8\xbf\x9e\x08\xb3\x1d$ICU\x93\x8e\x03\xf4\x1f\x14<\xd5\x15\x8c^B\xe6\xa5\xe8\x84\xa88I\n\xf2[E6\xa9\xc5\xcd\xca]Z\xeeOEZH\xe0\x0f\x9d@\xea\x9e\xc4\x8f"\xbcL\xf1g\xa6Z J\x18\'\xf3\\\xc6\x1e\xa8X\xb3\xb3h\xcf?(\xd8~?(^\x9b\xd4\x9d%\'D<\x00\x08\xa8\xa0L\xf6d\xf0\x1c\xc5\x0e\xeb\xf8\xa1B\x8a\xbc\x9c\xc1\xc8y\x99\xbc\x15\xe8:\xe2.d\x9f\x8ai\x07\x8f\x00\x0c\x00b\x11\x0c\x8b\xdf)\xd1\xe52t\x82\xbd\x8e\nU\x88\x1d\xb1=[\xc0\x9e\xc5\x02\x8e\xae\xd7\xd1*\xa5fr\x90+\xec3\xb3\xe3@\xfcI\x1a{\x86p\x83[\xffhV\xe0\x9c#\x8f_\x07{\x004R\xe3?\xbaVml=u\xfa\xfd\xf4\t\xc4\xec\xd9\xdf\xd1\xb4\xfda6m\x82\xd4\x00\x00\x98nN\xbf(\xbdHam/:\'\x7f\x8d\x9a[w>\x99\xd1/f|\xd8L$\xef\xb7\xfdx\xdd\x99\x1c\xec}\xa5m\x7fNw\xaf}\xd6?\xa7\x16\xf9\xdfd\xc6\xb4n\xb5\xd2\x9a6\xadu\xc2A\x0f\x02\x00\xd8\xb0\xc6\xed\x1f\xa4\xa5\x8b\xbd|\xeb\xb2_K)\x04\xf9\xd6\xaeH\xe0Mk\xacn\t\x80\x1c\xed\xed\x8b\xd1l\x16\xae\x9e\xcd\x02HP\xd6\x11\x9d{3\x03C\x12\x04X\xb9L\xc2\xa1|@u\x15jb(\x04\x85\xeaJ)Tt\xeb\xf4\xe2\xe4\xe5 9\x81\x8a\xe8\xeeU\xc7/\xcc$\x9f\xd7XJA\xd5\xbb\x01\x07\xc0\x99\x16}\xee\x9a\xdf\xd7/%\xf1o?\xca\xdd\'\xfe\xb1f\r\x00nBUm\xcbW4\xfb:\xed\xcf\x16I^\x05`\x03\xbar\xc3\x8e\x8e\xf1\xe6\x06U\x13S_\x07\xf8\xf5\x87\xa0~9\xed\xdb\x11"\x12\x00\x94\xb8\xa0\x17\xee\xaa@!<\xe5\xf7\xed\xa4\xcc\xab\xe9\xado\xf1\xaeGR\x9e\xc4\x17\xa95+I\xa9\\]\xb4p\xafN\x9c.\xfc\xb8E\xdfL\x82\t\xfb\xfd\x00\xbc\xc7\x95\xc5\x00!\xb6\xdf\xad=\x0er\x8a\xaceSK\xec\xd8MI\x9eG\xa6\xbb\xc0\xa80\xbfI\xd5\x1cq\xe67\xa2\x0c\xe5\x149p\xb6\x1f\xe6#\xe47T\x8c\xe65\x90\x13\xab\xe8V\x81\xc2ZkL&\x93\xc9\x1ac\xac\xb5\xccB\x04\xed\xba\xe1P(\x12\tG"\x11\xc7q\x88\x8a&X\xd1\xd4\x12\x11\xe6\xc0\xcf\xfa\xc6\x98tz\xcaZ\xcb\xccD\xa4\xb5\x0f\x81\xa3\xc8u\xb5\xe3\x14\t\x01\xe0\x0f\xa6\x9aZ\xb2\x12zl\x8b\x00\x00\x00\x00IEND\xaeB`\x82'
question = "What is the meaning of life?"
messages = [{"role": "user",
"content": [{"image": {"format": "png",
"source": {"bytes": image}
}
},
{"text": question}]
}]
pprint(messages)
</code></pre>
<p>How do I count the total length of <code>messages</code>?<br>
A simple <code>len(messages)</code> returns 1, since it is a list of 1 element.<br>
Then <code>len(messages[0])</code> returns 2, since that dict has 2 keys.<br>
I tried <code>json.dumps()</code>, but it fails with <code>TypeError: Object of type bytes is not JSON serializable</code>.<br>
Pretty printing to an IO stream fails as well.<br>
My best idea was to count the lengths of <code>question</code>, <code>image</code> and the rest of <code>messages</code> separately and sum them, but I'm afraid that the bytes' representation will take up more space...<br>
Any other ideas?</p>
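<p>For what it's worth, a minimal sketch of one interpretation -- summing the sizes of the leaf strings and bytes without JSON-serializing anything (what should count as "length" here is an assumption on my part):</p>
<pre class="lang-py prettyprint-override"><code>def payload_size(obj) -> int:
    """Rough total content size, walking nested lists/dicts (bytes and str counted by len)."""
    if isinstance(obj, (bytes, str)):
        return len(obj)
    if isinstance(obj, dict):
        return sum(payload_size(k) + payload_size(v) for k, v in obj.items())
    if isinstance(obj, (list, tuple)):
        return sum(payload_size(x) for x in obj)
    return len(str(obj))

print(payload_size(messages))  # counts the raw bytes of the image, not its repr
</code></pre>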
|
<python>
|
2025-10-30 13:09:24
| 0
| 1,174
|
lmocsi
|
79,804,255
| 1,144,588
|
Reference column named "*" in Polars
|
<p>I have a Polars DataFrame with a column named <code>"*"</code> and would like to reference just that column. When I try to use <code>pl.col("*")</code> it is interpreted as a wildcard for "all columns." Here's an example:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"A": [1, 2], "B": [3, None], "*": [4, 5]})
print(df.select(pl.col("*"))) # --> prints all columns
</code></pre>
<p>The <a href="https://docs.pola.rs/api/python/stable/reference/expressions/col.html#polars-col" rel="nofollow noreferrer">documentation</a> for <code>pl.col</code> doesn't include any recommendation on escaping the asterisk. My naive attempt at <code>pl.col(r"\*")</code> fails.</p>
<p>Edit:</p>
<p>There's a bit of <a href="https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem">XY problem</a> here as my original underlying goal was to <em>drop</em> the <code>"*"</code> column with <code>df.drop("*")</code>, but I quickly realized that the call relied on the behavior of <code>pl.col</code>. jqurious correctly identified that goal in <a href="https://stackoverflow.com/a/79804542/1144588">their answer</a>.</p>
|
<python><dataframe><python-polars>
|
2025-10-29 21:56:41
| 4
| 359
|
Sam
|
79,804,144
| 4,503,546
|
Automating X (Twitter) posts using scheduler and lines from a CSV file
|
<p>I have put together a simple program to automate posting to x.com (formerly Twitter) with a free X account (i.e. you don't have to pay for an account to run this code).</p>
<pre><code>import tweepy
BEARER_TOKEN = ''
API_KEY = ''
API_SECRET_KEY = ''
ACCESS_TOKEN = ''
ACCESS_TOKEN_SECRET = ''
client = tweepy.Client(BEARER_TOKEN, API_KEY, API_SECRET_KEY, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
client.create_tweet(text='This is the text I would like to post to X')
</code></pre>
<p>Note that, to run this code, you will need to apply for developer status on X.com (<a href="https://developer.x.com/en/portal/dashboard" rel="nofollow noreferrer">https://developer.x.com/en/portal/dashboard</a>) and generate the various keys required in the code below. You can get these keys by navigating to the X developer portal then going to "Projects & Apps" (left hand side), selecting your project, and then selecting the "Keys and tokens" tab (middle of the screen) and clicking the appropriate buttons. You might have to authenticate yourself on the settings tab (left of Keys and token tab) first in order to generate the keys and tokens. You will also have to install tweepy via pip.</p>
<p>I would like to improve this code as follows:</p>
<ol>
<li><p>Instead of hard coding the post text, I'd like to have the program loop through line items in a CSV file or similar. I'd like to know how to do this in general, including how to do it so that the code won't keep posting the same line again and again. In other words, once the code posts the first line, it will move to the second line.</p>
</li>
<li><p>I would like to set some kind of scheduler to run this program every 30 minutes or so. So at 8AM the code posts the first line then at 8:30AM the second line then at 9AM the third and so on. I am using Windows so perhaps best to use the task scheduler?</p>
</li>
</ol>
<p>To clarify, in the ideal scenario, I will put together a CSV file with 10, 20, 30, etc. lines that include posts for that day. I will then turn the program/scheduler on and it will gradually loop through each line using some user defined time interval (e.g. 30 minutes).</p>
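<p>For illustration, a minimal sketch of the "post one line per run" pattern that pairs naturally with Task Scheduler firing the script every 30 minutes; the file names and the one-post-per-row CSV layout are assumptions:</p>
<pre class="lang-py prettyprint-override"><code>import csv
from pathlib import Path

STATE_FILE = Path("next_index.txt")  # hypothetical file remembering how far we got

def next_post(csv_path: str = "posts.csv") -> str | None:
    """Return the next unposted line from the CSV and advance the persisted index."""
    idx = int(STATE_FILE.read_text()) if STATE_FILE.exists() else 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = [row[0] for row in csv.reader(f) if row]
    if idx >= len(rows):
        return None  # nothing left to post
    STATE_FILE.write_text(str(idx + 1))
    return rows[idx]

text = next_post()
if text:
    client.create_tweet(text=text)  # `client` as created in the snippet above
</code></pre>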
|
<python><twitter><automation>
|
2025-10-29 19:11:23
| 0
| 407
|
GC123
|
79,804,134
| 7,993,601
|
Python Google style doc string for generic class Type Parameter
|
<p>I want to annotate a type parameter for a generic dataclass of mine with a Google-style docstring, to support both generating documentation <strong>and</strong> mouse hovering within VS Code (and other editors/IDEs). If I use NumPy-style docstrings it appears to work for mouse hovering; however, when I try to use Google style, it doesn't work. So far I've found little to no documentation on how to annotate a type parameter with Google-style docstrings, and the closest I've found is:</p>
<pre class="lang-py prettyprint-override"><code>@final
@dataclass(frozen=True)
class MyDataclass[T]:
"""
A generic frozen dataclass.
Type Parameters:
T: A type parameter for this dataclass.
Attributes:
attribute (str): This dataclass's `string` attribute.
"""
attribute: str
</code></pre>
<p>I just want to know if this is actually correct or if Google docstrings even support annotating type parameters.</p>
|
<python><python-typing><docstring>
|
2025-10-29 18:57:10
| 1
| 843
|
Snap
|
79,804,104
| 10,203,572
|
Memory usage keeps increasing when extracting embeddings via sentence-transformers
|
<p>I have a set of about 100M paragraph-sized strings (multilingual) I am extracting embeddings for, but the memory usage keeps increasing until I start overflowing into disk swap:</p>
<pre><code>model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B",
tokenizer_kwargs={"padding_side": "left"})
embeddings = []
for samples_page in my_paginated_samples_loader:
embeddings.extend(model.encode(samples_page))
</code></pre>
<p>I have tested my sample loader for memory leaks, but it does not seem to be the issue, so it has to be something with Sentence Transformers.</p>
<p>Does the issue seem to be something obvious like the way I am running it being wrong? Or should I try to troubleshoot this in some specific way?</p>
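<p>To rule out the growing <code>embeddings</code> list itself, one option is a sketch like the following, which flushes each page of vectors to disk instead of accumulating them in RAM. The <code>embeddings/</code> output directory is a hypothetical choice, and <code>my_paginated_samples_loader</code> is the same loader as above:</p>
<pre><code>import os

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B",
                            tokenizer_kwargs={"padding_side": "left"})

os.makedirs("embeddings", exist_ok=True)
for page_idx, samples_page in enumerate(my_paginated_samples_loader):
    page_embeddings = model.encode(samples_page, convert_to_numpy=True)
    np.save(f"embeddings/page_{page_idx:06d}.npy", page_embeddings)  # flush to disk
    del page_embeddings  # keep nothing from previous pages in memory
</code></pre>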
|
<python><macos><nlp><apple-silicon><sentence-transformers>
|
2025-10-29 18:09:52
| 1
| 1,066
|
Layman
|
79,803,382
| 1,335,340
|
Transform DataFrame containing ID pairs into a list of sets
|
<p>I have a Pandas DataFrame with the following structure</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>left_id</th>
<th>right_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>b</td>
</tr>
<tr>
<td>c</td>
<td>a</td>
</tr>
<tr>
<td>x</td>
<td>y</td>
</tr>
</tbody>
</table></div>
<p>I need to transform this into a list of sets, like</p>
<pre><code>[
{'a', 'b', 'c'},
{'x', 'y'}
]
</code></pre>
<p>the first two rows should be combined into a single set, because row 1 has IDs <code>a</code> and <code>b</code> and row 2 has IDs <code>c</code> and <code>a</code>, which, in this df, means the three IDs are related.</p>
<p>What is the right way to do this?</p>
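<p>One way to phrase the problem is as connected components of a graph whose edges are the rows; a minimal sketch, assuming <code>networkx</code> is acceptable as a dependency:</p>
<pre><code>import networkx as nx
import pandas as pd

df = pd.DataFrame({"left_id": ["a", "c", "x"], "right_id": ["b", "a", "y"]})

# Each row is an edge; related IDs end up in the same connected component.
G = nx.from_pandas_edgelist(df, source="left_id", target="right_id")
groups = list(nx.connected_components(G))
print(groups)  # [{'a', 'b', 'c'}, {'x', 'y'}] (element order within sets may vary)
</code></pre>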
|
<python><pandas><dataframe>
|
2025-10-29 01:57:32
| 3
| 917
|
Joe F.
|
79,803,302
| 506,230
|
Running `pip install -e .` does not install project to PATH
|
<p>I'm working on a wrapper around a tool called Coverity. I want to provide some syntactic sugar around the existing commands, so I'm using Python's Click library to work up a simple CLI.</p>
<p>The problem comes when I follow the instructions on <a href="https://click.palletsprojects.com/en/stable/entry-points/" rel="nofollow noreferrer">Click's Packaging Entry Points doc page</a>, which says to create a virtual env, then to run <code>pip install -e . </code>.</p>
<p>That command succeeds, or at least appears to, but after running it, my CLI command <code>cov</code> is not available. Details of the directory and files below.</p>
<p>I'm on an Apple Silicon Mac, running Sonoma 14.6.1.</p>
<p>Here's a breakdown of what I've done so far.</p>
<pre><code>$ which python3
/usr/local/bin/python3
$ python3 -m venv venv
$ source ./venv/bin/activate
$ which python
~/Desktop/Scripts/coverity-client/venv/bin/python
$ python --version
Python 3.12.1
$ which pip
~/Desktop/Scripts/coverity-client/venv/bin/pip
$ pip install -e .
Obtaining ~/Scripts/coverity-client
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... done
Preparing editable metadata (pyproject.toml) ... done
Requirement already satisfied: click>=8.3.0 in ./venv/lib/python3.12/site-packages (from cov==1.0.0) (8.3.0)
Building wheels for collected packages: cov
Building editable for cov (pyproject.toml) ... done
Created wheel for cov: filename=cov-1.0.0-0.editable-py3-none-any.whl size=1431 sha256=4e62cae495d50738c1d0a45bcc2a373d341e8e473563e1384523b25b45a1dc12
Stored in directory: /private/var/folders/rk/7x0cwmtd009024nm3qdz9jxr0000gn/T/pip-ephem-wheel-cache-i4s39lw9/wheels/93/81/02/9f29175e8cf2a8f44a4fd6285a7a954922e3ab9f6a3326653d
Successfully built cov
Installing collected packages: cov
Attempting uninstall: cov
Found existing installation: cov 1.0.0
Uninstalling cov-1.0.0:
Successfully uninstalled cov-1.0.0
Successfully installed cov-1.0.0
$ which cov
cov not found
</code></pre>
<p>The command <code>pip show cov</code> indicates that <code>cov</code> was installed to the site-packages folder:</p>
<pre><code>pip show cov
Name: cov
Version: 1.0.0
Summary: Coverity command line tools
Home-page:
Author:
Author-email:
License:
Location: ~/Desktop/Scripts/coverity-client/venv/lib/python3.12/site-packages
Editable project location: ~/Desktop/Scripts/coverity-client
Requires: click
Required-by:
</code></pre>
<p>and</p>
<pre><code>$ ls -l venv/lib/python3.12/site-packages
__editable__.cov-1.0.0.pth
click
click-8.3.0.dist-info
cov-1.0.0.dist-info
pip
pip-25.3.dist-info
</code></pre>
<p>But not to the bin folder.</p>
<pre><code>$ ls -l venv/bin
Activate.ps1
activate
activate.csh
activate.fish
build
pip
pip3
pip3.12
python -> python3.12
python3 -> python3.12
python3.12 -> /Library/Frameworks/Python.framework/Versions/3.12/bin/python3.12
</code></pre>
<p>Lastly I can drop into a Python shell, while the venv is active, and import my project:</p>
<pre><code>$ python
Python 3.12.1 (v3.12.1:2305ca5144, Dec 7 2023, 17:23:38) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
>>> from cov import cov
>>> cov.build()
~/coverity-tools/2025.9.0/bin/
</code></pre>
<p>Can anyone help me understand why this isn't working as expected?</p>
<h3>DETAILS</h3>
<p>My project directory:</p>
<pre class="lang-bash prettyprint-override"><code>tree
.
├── pyproject.toml
├── requirements.txt
└── src
└── cov
├── __init__.py
└── cov.py
</code></pre>
<p>My <code>pyproject.toml</code>.</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "cov"
version = "1.0.0"
description = "Coverity command line tools"
requires-python = ">=3.11"
dependencies = [
"click>=8.3.0",
]
[project.scripts]
build = "cov.cov:build"
</code></pre>
<p>cov.py</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
import click
@click.command()
@click.option('--bin_path', default='~/coverity-tools/2025.9.0/bin/', help='Path to coverity tools.')
def build(bin_path: str, ):
click.echo(bin_path)
</code></pre>
|
<python><command-line-interface>
|
2025-10-28 22:16:33
| 0
| 5,055
|
commadelimited
|
79,803,236
| 6,822,178
|
Pandas DataFrame with a hundred million entries and counting the number of identical characters in strings
|
<p>I have a <code>pandas</code> DataFrame (<code>df</code>) with two columns (namely <code>Tuple</code> and <code>Set</code>) and approximately 100,000,000 entries. The <code>Tuple</code> column data is a string of exactly 9 characters. The <code>Set</code> column data is an ordinal integer from <code>0</code> to approximately <code>1,000</code> that assigns the <code>Tuple</code> to the corresponding <code>Set</code>. Each <code>Set</code> contains around 100,000 consecutive tuples.</p>
<p>Now I need to label each <code>Tuple</code> (in a new column <code>Label</code>) as <code>1</code> if it contains at least 5 identical characters, and <code>0</code> otherwise.</p>
<p>I'm trying to find a way to do it without taking a lifetime...</p>
<p>My first naive attempt (after importing <code>Counter</code> from <code>collections</code>) was</p>
<pre class="lang-py prettyprint-override"><code>for j in tqdm(df.index):
cond = max(Counter(df[df.index==j].Tuple.values[0]).values()) >= 5
df.loc[df.index==j, "Label"] = int(cond)
</code></pre>
<p>but according to <code>tqdm</code>, it'd take approximately 3,350 hours on my MacBook Pro (almost 5 months, lol).</p>
<p>So, since the <code>Tuple</code>s are assigned to <code>Set</code>s of approximately 100,000 entries, I thought to label them within each separate <code>Set</code> and put everything back together afterwards</p>
<pre class="lang-py prettyprint-override"><code>dfs = []
for i in tqdm(df.Set.unique()):
_df = df[df.Set==i].copy(deep=True)
for j in tqdm(_df.index, leave=False):
cond = max(Counter(_df[_df.index==j].Tuple.values[0]).values()) >= 5
_df.loc[_df.index==j, "Label"] = int(cond)
dfs.append(_df)
</code></pre>
<p>According to <code>tqdm</code>, it'd take 12 hours, more or less. Far better than 5 months and quite affordable! But I was wondering whether there was a more efficient way (maybe some useful <code>numpy</code> or <code>pandas</code> functions I'm not aware of, and that could speed everything up a little bit more).</p>
<p>Any suggestions?</p>
<hr />
<h1>Update</h1>
<p>As suggested, I provide a sample dataset and imports</p>
<pre class="lang-py prettyprint-override"><code>from collections import Counter
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
n = int(1e4) # this will be ~1e8 in real cases
n_set = 10 # this will be ~1000 in real cases
df = pd.DataFrame({
"Tuple": np.random.randint(100000000, 1000000000, size=n).astype(str),
"Set": np.random.randint(n_set, size=n)
})
</code></pre>
<p>It'll take a while with <code>n=int(1e8)</code>.</p>
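<p>For comparison, a sketch of the same per-row <code>Counter</code> check applied through <code>Series.map</code> on the sample data above; it is still Python-level work per row, but it avoids the boolean-mask lookups of the loops shown earlier:</p>
<pre class="lang-py prettyprint-override"><code>from collections import Counter

import numpy as np
import pandas as pd

n = int(1e4)
df = pd.DataFrame({
    "Tuple": np.random.randint(100000000, 1000000000, size=n).astype(str),
    "Set": np.random.randint(10, size=n)
})

# One Counter per string, no per-row DataFrame filtering.
df["Label"] = df["Tuple"].map(lambda s: int(max(Counter(s).values()) >= 5))
</code></pre>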
|
<python><pandas><dataframe><bigdata><counter>
|
2025-10-28 20:21:06
| 3
| 2,289
|
Max Pierini
|
79,803,160
| 10,017,890
|
With GCP Vertex AI, how do I do prompt management
|
<p>In AWS Bedrock, we can use AWS BEDROCK PROMPT MANAGEMENT to manage the prompt lifecycle. How do I do this in GCP VertexAI or Google AI studio</p>
<p>I have tried to use the code mentioned in the link below, but the code does not seem to work and the Vertex AI package seems to be old.</p>
<p><a href="https://docs.cloud.google.com/vertex-ai/generative-ai/docs/model-reference/prompt-classes#overview" rel="nofollow noreferrer">https://docs.cloud.google.com/vertex-ai/generative-ai/docs/model-reference/prompt-classes#overview</a></p>
<p>I tried the below code</p>
<pre><code>import vertexai
from vertexai import types
from google.genai import types
prompt = types.Prompt(
prompt_data=types.PromptData(
contents=[genai_types.Content(parts=[genai_types.Part(text="Hello, {name}! How are you?")])],
variables=[
{"name": genai_types.Part(text="Alice")},
{"name": genai_types.Part(text="Bob")},
],
model="your-model",
),
)
</code></pre>
<p><strong>Update on the above code:</strong></p>
<p>I kept trying different options and was able to get the above code working as shown below. I had to import the types from a protected member within vertexai, which I feel is not a good practice. I would really appreciate some guidance and help on this.</p>
<pre><code>import vertexai
from google.genai import types as genai_types
from vertexai._genai import types
# Instantiate GenAI client from Vertex SDK
# Replace with your project ID and location
client = vertexai.Client(project='xxxx', location='us-central1')
prompt = types.Prompt(
prompt_data=types.PromptData(
contents=[genai_types.Content(parts=[genai_types.Part(text="Hello, {name}! How are you?")])],
system_instruction=genai_types.Content(parts=[genai_types.Part(text="Please answer in a short sentence.")]),
variables=[
{"name": genai_types.Part(text="Alice")},
],
model="gemini-2.5-flash",
),
)
prompt_resource = client.prompts.create(
prompt=prompt,
)
print(prompt_resource)
</code></pre>
|
<python><google-cloud-platform><google-cloud-vertex-ai><google-generativeai>
|
2025-10-28 18:41:17
| 1
| 1,860
|
Rajib Deb
|
79,803,079
| 1,946,418
|
Python install exe cmdline options seems not enough?
|
<p>I'm trying to install the Python installer exe on several machines, so I'm looking for some cmdline help.</p>
<p><a href="https://www.python.org/ftp/python/3.14.0/python-3.14.0-amd64.exe" rel="nofollow noreferrer">https://www.python.org/ftp/python/3.14.0/python-3.14.0-amd64.exe</a></p>
<p><code>python-3.14.0-amd64.exe /help</code> opens the following dialog</p>
<p><a href="https://i.sstatic.net/l4o3bb9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l4o3bb9F.png" alt="python-exe-cmdline-options" /></a></p>
<p>I would like to customize the install location and a few other options (add to PATH, etc.).</p>
<p>Anyone know if there are some cmdline options available? TIA</p>
<p>Note: Using Chocolatey or pyenv-win isn't an option in my case. I need to stick with pure PowerShell.</p>
|
<python><windows><exe>
|
2025-10-28 16:51:35
| 0
| 1,120
|
scorpion35
|
79,803,053
| 16,688,854
|
Asynchronous listening and processing in a Pyside app
|
<p>I am having difficulties integrating asyncio with Pyside.</p>
<p>What I want to achieve:</p>
<ul>
<li>I have several emitters (up to 30) sending messages independently every few milliseconds (200ms) in multicast.</li>
<li>I have a python PySide app that needs to listen to these incoming messages, process them, and update a map plot accordingly (move shapes, change colors of markers, etc...).</li>
</ul>
<p>Because I cannot share what I am working on for confidentiality reasons, I have devised a proxy example that is simpler and close enough to my actual need that I can share here.</p>
<p>In this proxy example:</p>
<ul>
<li>I have an emitter script that sends messages indefinitely on two ports, at a rate that can be modified (here the sleep time is 1 s). This simulates my sensors that emit messages. In this script, the messages are sent one after the other each time. To be closer to reality I guess we ought to have two separate scripts running in two different consoles, but I think this one can do the trick here. The messages consist of random numbers in this example.</li>
<li>On the receiving side, I want to create an app with PySide that receives those messages, processes them (multiply the numbers by 100) and displays them in two text boxes, each associated with a different port.</li>
</ul>
<p><a href="https://i.sstatic.net/irAe7aj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/irAe7aj8.png" alt="enter image description here" /></a></p>
<p>I am looking at using an asynchronous approach with asyncio here because I think my problem is mainly I/O-bound. The processing stage is minimal. In my real example it would amount to updating shapes on a satellite view map (leaflet via javascript), maybe doing a few calculations on the received data before that, but nothing too CPU-intensive.
I also chose to embed my plot in a PySide app to be able to add more buttons and features later on.
Please feel free to indicate any other solution that would be more suitable (multiple threads? multiple processes?).</p>
<p>Anyway, I have tried the following qt doc <a href="https://doc.qt.io/qtforpython-6/examples/example_async_minimal.html" rel="nofollow noreferrer">minimal example</a> but it does not work.</p>
<p>And now I am calling for your help.</p>
<p>Below are the emitter script as well as the code base for the PySide app, without the asynchronous listening and processing stages (I asked Le Chat to generate the full PySide app, but it never works, which is why I am only giving you the base code to fill in here):</p>
<p>EMITTER.py</p>
<pre><code>import socket
import time
import struct
import random
def send_multicast_messages(port1, port2, multicast_group='224.1.1.1'):
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
ttl = struct.pack('b', 1)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
try:
while True:
# Generate a random value for port1
value1 = random.randint(1, 100)
message1 = f"{value1}"
sock.sendto(message1.encode(), (multicast_group, port1))
print(f"Sent to port {port1}: {message1}")
# Generate a random value for port2
value2 = random.randint(1, 100)
message2 = f"{value2}"
sock.sendto(message2.encode(), (multicast_group, port2))
print(f"Sent to port {port2}: {message2}")
time.sleep(1)
except KeyboardInterrupt:
print("\nExiting the program.")
finally:
sock.close()
if __name__ == "__main__":
port1 = 5000
port2 = 6000
send_multicast_messages(port1, port2)
</code></pre>
<p>APP.py</p>
<pre><code>import sys
from PySide6.QtWidgets import (
QApplication, QMainWindow, QVBoxLayout, QTextEdit, QLabel, QWidget
)
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("Port Monitor")
self.setGeometry(100, 100, 400, 300)
# Create widgets
self.label1 = QLabel("Port 5000:")
self.text_edit1 = QTextEdit()
self.text_edit1.setReadOnly(True)
self.label2 = QLabel("Port 6000:")
self.text_edit2 = QTextEdit()
self.text_edit2.setReadOnly(True)
# Layout
layout = QVBoxLayout()
layout.addWidget(self.label1)
layout.addWidget(self.text_edit1)
layout.addWidget(self.label2)
layout.addWidget(self.text_edit2)
# Central widget
container = QWidget()
container.setLayout(layout)
self.setCentralWidget(container)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
</code></pre>
<p>Thank you very much in advance for your help.</p>
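<p>For reference, one direction that avoids asyncio entirely is to let Qt's own event loop receive the datagrams with <code>QUdpSocket</code> from <code>PySide6.QtNetwork</code>; its <code>readyRead</code> signal is delivered on the GUI thread, so widgets can be updated directly. The following is only a minimal sketch for a single port, not a verified drop-in replacement for APP.py:</p>
<pre><code>import sys

from PySide6.QtNetwork import QHostAddress, QUdpSocket
from PySide6.QtWidgets import QApplication, QTextEdit, QVBoxLayout, QWidget


class PortView(QWidget):
    """One read-only text box fed by one multicast port, no extra threads."""

    def __init__(self, port, group="224.1.1.1"):
        super().__init__()
        self.text = QTextEdit()
        self.text.setReadOnly(True)
        layout = QVBoxLayout(self)
        layout.addWidget(self.text)

        self.sock = QUdpSocket(self)
        self.sock.bind(QHostAddress("0.0.0.0"), port)
        self.sock.joinMulticastGroup(QHostAddress(group))
        # readyRead fires in the GUI thread, so it is safe to touch widgets here.
        self.sock.readyRead.connect(self.read_pending)

    def read_pending(self):
        while self.sock.hasPendingDatagrams():
            datagram = self.sock.receiveDatagram()
            value = int(bytes(datagram.data()).decode())
            self.text.append(str(value * 100))  # the "processing" step


if __name__ == "__main__":
    app = QApplication(sys.argv)
    view = PortView(5000)
    view.show()
    sys.exit(app.exec())
</code></pre>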
|
<python><pyqt><python-asyncio><pyside><multicast>
|
2025-10-28 16:16:04
| 1
| 337
|
Antoine101
|
79,803,031
| 6,024,187
|
Remove horizontal white space between square matplotlib plots
|
<p>I cannot remove the horizontal white space between a large number of subplots in matplotlib. How do I do this?</p>
<p>I think the problem is that the default plots are very wide, so that even when I set <code>wspace</code> to zero and matplotlib thinks it is smashing the plots together, there is still a ton of horizontal white space.</p>
<pre><code>def bad_plot() -> plt.Figure:
n_rows = 9
n_cols = 5
fig, axs = plt.subplots(
n_rows, n_cols,
# tight_layout=True,
# constrained_layout=True,
figsize=(n_rows + 1, n_cols + 1)
# gridspec_kw = {'wspace':0, 'hspace':0},
)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=0, hspace=0.1)
for nl, row in zip(range(10), axs):
gf = np.zeros((nl + 1, 1, 28, 28))
n = min(n_cols, gf.shape[0])
for i, (feat, lb) in enumerate(zip(gf[0:n], [nl] * n)):
ax = row[i]
ax.set_aspect(1)
ax.imshow(feat[0])
ax.text(0.03, 0.97, f"text: {lb:d}",
horizontalalignment='left',
verticalalignment='top',
transform=ax.transAxes,
fontsize=12,
color='w'
)
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
i += 1
while i < len(row):
# row[i].imshow(np.zeros((28, 28)))
# row[i].set_aspect(1)
# row[i].set_axis_off()
# row[i].set_visible(False)
i += 1
# fig.tight_layout()
return fig
</code></pre>
<p>You can see, from the comments littering the code, that I have tried a large number of Stack Overflow solutions to this problem and not found a solution. This is what the output looks like:</p>
<p><a href="https://i.sstatic.net/pZYWMGfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pZYWMGfg.png" alt="a figure with too much horizontal white space" /></a></p>
<p>I want all that horizontal white space gone.</p>
|
<python><matplotlib>
|
2025-10-28 15:58:16
| 1
| 897
|
Finncent Price
|
79,802,985
| 1,264,097
|
Real copies of functions in Python
|
<p>In Python 3, I have the following code snippet:</p>
<pre><code>x = [1,2,3]
test = []
for i in range(3):
def broken_func(a):
return x[i] + a
test.append(broken_func)
print('Test at creation :', test[-1](1), test[-1])
for func in test:
print('Test later :', func(1), func)
</code></pre>
<p>The idea is to create a list of functions, but my output tells me that it didn't work:</p>
<pre><code>Test at creation : 2 <function broken_func at 0x7fa95552e200>
Test at creation : 3 <function broken_func at 0x7fa95541e170>
Test at creation : 4 <function broken_func at 0x7fa95541d360>
Test later : 4 <function broken_func at 0x7fa95552e200>
Test later : 4 <function broken_func at 0x7fa95541e170>
Test later : 4 <function broken_func at 0x7fa95541d360>
</code></pre>
<p>All later code execution seems to still depend on the value of <code>x[i]</code>, stuck at <code>i=2</code>, which is unwanted behavior. If I do the same thing with <code>lambda x: ...</code>, there is the same problem. I printed the "function" to show that all three functions are indeed distinct.</p>
<p>Why does this code not work, and how to fix it?</p>
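<p>For reference, a commonly shown variant (just a sketch of the same loop) binds the current value of <code>i</code> through a default argument, which is evaluated when the function is defined rather than when it is called:</p>
<pre><code>x = [1, 2, 3]
test = []
for i in range(3):
    def fixed_func(a, i=i):  # the default captures the value of i at definition time
        return x[i] + a
    test.append(fixed_func)

for func in test:
    print('Test later :', func(1))  # 2, 3, 4
</code></pre>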
|
<python>
|
2025-10-28 15:18:57
| 1
| 697
|
R. C.
|
79,802,842
| 3,025,981
|
Adding an Object column to a polars DataFrame with broadcasting
|
<p>If I have a DataFrame, I can create a column with a single value like this:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame([[1, 2, 3]])
df.with_columns(pl.lit("ok").alias("metadata"))
</code></pre>
<pre><code>shape: (3, 2)
┌──────────┬──────────┐
│ column_0 ┆ metadata │
│ --- ┆ --- │
│ i64 ┆ str │
╞══════════╪══════════╡
│ 1 ┆ ok │
│ 2 ┆ ok │
│ 3 ┆ ok │
└──────────┴──────────┘
</code></pre>
<p>but with <code>pl.Object</code> columns, it does not work:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame([[1, 2, 3]])
df.with_columns(pl.lit("ok", dtype=pl.Object).alias("metadata"))
# InvalidOperationError: casting from Utf8View to FixedSizeBinary(8) not supported
</code></pre>
<p>using one-element <code>pl.Series</code> does not work either:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.Series(["ok"], dtype=pl.Object).alias("metadata"))
# InvalidOperationError: Series metadata, length 1 doesn't
# match the DataFrame height of 3
# If you want expression: Series[metadata] to be broadcasted,
# ensure it is a scalar (for instance by adding '.first()').
</code></pre>
<p>It seems that I either need to create a <code>pl.Series</code> of the correct length manually (like <code>pl.Series(["ok"] * df.height, dtype=pl.Object)</code>), or do a cross-join like this:</p>
<pre class="lang-py prettyprint-override"><code>df.join(pl.Series(["ok"], dtype=pl.Object).to_frame("metadata"), how="cross")
</code></pre>
<p>It works, but is not very elegant. Are there any better solutions?</p>
<p>NB. I used a string object just as an example. I really need <code>pl.Object</code> column to store various heterogeneous data, not strings, and cannot use, say, <code>pl.Struct</code> instead.</p>
|
<python><dataframe><python-polars>
|
2025-10-28 13:07:20
| 2
| 8,187
|
Ilya V. Schurov
|
79,802,787
| 482,717
|
Analyze emails using Apple Intelligence on macOS 26 (Tahoe)
|
<p>How can I access Mail app emails on my MacBook with macOS 26.0.1 using Python and Apple Intelligence?</p>
<p>I need to generate a report based on email Sender, Topic, and Content.</p>
|
<python><macos><apple-intelligence>
|
2025-10-28 12:23:10
| 0
| 64,604
|
Paul Verest
|
79,802,758
| 1,230,477
|
Failure to load Auth Cookies by Playwright into browser context
|
<p>With Camoufox server and browser I manually create a browser context and save it as a state file (cookies, incl. auth cookies).<br />
See the file:<br />
<a href="https://i.sstatic.net/19wgtOm3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19wgtOm3.png" alt="enter image description here" /></a></p>
<p>But when I try to load the file into a new server & browser, Playwright fails to load the auth cookies into the browser context.</p>
<p>From AI:<br />
<strong>Root Cause Identified</strong><br />
The problem is that Playwright is rejecting the site auth cookies during the loading process, even though they exist in the session file. This is a common issue with <code>__Host-</code> prefixed cookies and <code>domain/path</code> constraints.</p>
<p>Domain specification:</p>
<ul>
<li>The __Host- cookies require exact domain matching</li>
<li>Secure flag: All __Host- cookies must have secure=True</li>
<li>Path requirement: __Host- cookies must have path=/</li>
<li>Navigation timing: Cookies might need to be set after page navigation</li>
</ul>
<p>I made the fixes above but the auth cookies still are not properly added to the context.</p>
<p><strong>Any way out?</strong></p>
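<p>For comparison, the stock Playwright way of restoring a saved session is to hand the whole state file to <code>new_context</code> instead of adding cookies one by one; a minimal sketch with plain Playwright (not Camoufox-specific), where <code>state.json</code> is the previously saved storage state:</p>
<pre><code>from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.firefox.launch()
    # state.json was produced earlier with context.storage_state(path="state.json")
    context = browser.new_context(storage_state="state.json")
    page = context.new_page()
    page.goto("https://example.com")
    print(page.title())
    browser.close()
</code></pre>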
|
<python><cookies><playwright><camoufox>
|
2025-10-28 11:51:26
| 0
| 6,351
|
Igor Savinkin
|
79,802,659
| 3,535,147
|
Trying to solve a pip-compile stack trace error
|
<p>When I run pip-compile I get an error telling me that</p>
<pre><code>AttributeError: 'InstallRequirement' object has no attribute 'use_pep517'
</code></pre>
<p>I am using</p>
<ul>
<li>Python 3.11.6</li>
<li>pip v25.3</li>
<li>pip-compile v7.5.1</li>
</ul>
<p>From other threads, I suspect that there is a compatibility issue between pip and pip-compile versions, but I couldn't find any information on the compatibility mix.</p>
<p>I have tried a number of things to resolve this</p>
<ul>
<li>Clearing down my pyenv virtualenvs and rebuilding</li>
<li>Upgrading to Python 3.11.14</li>
<li>Upgrading to Python 3.13.8</li>
<li>Reducing the <code>requirements.in</code> file down to one library rather than everything the app needs. I have tried each library in turn on its own.</li>
</ul>
<p>Nothing I have done has had any effect. I can't think what else I can do to resolve this and should be grateful for any help.</p>
|
<python><pip><pip-compile>
|
2025-10-28 10:27:32
| 1
| 303
|
user3535147
|
79,802,641
| 17,472,988
|
Micromamba crashes with null pointer dereference error
|
<p>When attempting to execute Micromamba</p>
<pre class="lang-none prettyprint-override"><code>micromamba.exe --version
</code></pre>
<p>it crashes with the null pointer dereference error:</p>
<p><a href="https://i.sstatic.net/Yjg9Iix7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yjg9Iix7.jpg" alt="Micromamba execution error" /></a></p>
<hr />
<p><strong>Key Environment Details</strong></p>
<p><strong>OS</strong>: Windows 10 LTSC 2019.<br />
<strong>Micromamba</strong>: 2.3.3.</p>
<hr />
<p>The same command and file work normally on Windows 10 LTSC 2021.</p>
|
<python><anaconda><conda><mamba><micromamba>
|
2025-10-28 10:16:46
| 1
| 1,859
|
PChemGuy
|
79,802,572
| 6,265,620
|
How to read uploaded file by LangGraph
|
<p>I am following <a href="https://docs.langchain.com/oss/python/langchain/context-engineering#messages" rel="nofollow noreferrer">Context engineering in agents - Docs</a> by LangChain, which reads the uploaded files in a call to <code>wrap_model</code>.</p>
<p>The code, copied from the linked document:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.agents import create_agent
from langchain.agents.middleware import wrap_model_call, ModelRequest, ModelResponse
from typing import Callable
@wrap_model_call
def inject_file_context(
request: ModelRequest,
handler: Callable[[ModelRequest], ModelResponse]
) -> ModelResponse:
"""Inject context about files user has uploaded this session."""
# Read from State: get uploaded files metadata
uploaded_files = request.state.get("uploaded_files", [])
if uploaded_files:
# Build context about available files
file_descriptions = []
for file in uploaded_files:
file_descriptions.append(
f"- {file['name']} ({file['type']}): {file['summary']}"
)
file_context = f"""Files you have access to in this conversation:
{chr(10).join(file_descriptions)}
Reference these files when answering questions."""
# Inject file context before recent messages
messages = [
*request.messages,
{"role": "user", "content": file_context},
]
request = request.override(messages=messages)
return handler(request)
agent = create_agent(
model="openai:gpt-4o",
tools=[...],
middleware=[inject_file_context]
)
</code></pre>
<p>I tested this agent with LangSmith Studio, but <code>request.state.get("uploaded_files", [])</code> always returns <code>[]</code> when I upload files from Studio.</p>
<p>How can I read the files that I uploaded?</p>
|
<python><langchain><langgraph><langsmith>
|
2025-10-28 08:51:20
| 1
| 30,226
|
Edward
|
79,802,430
| 188,331
|
Transformers with Python 3.12.3 produce lots of errors
|
<p>I got Python 3.12.3 on an Ubuntu server. I tried to install <code>transformers</code>, <code>tokenizers</code>, <code>datasets</code> and <code>accelerate</code> to use the <code>Seq2SeqTrainer</code> in the <code>transformers</code>.</p>
<p>I used a virtual environment for the installation, ensuring the installation of the packages won't affect the original system. After I activated the virtual environment, I installed the above packages via:</p>
<pre><code>pip install transformers tokenizers accelerate --upgrade
</code></pre>
<p>which installed the following versions: accelerate 1.11.0, transformers 4.57.1, tokenizers 0.22.0 (the latest versions at the time of writing).</p>
<p>The minimal testing codes are as follows:</p>
<pre><code>from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer,
BartForConditionalGeneration,
BertTokenizer,
DataCollatorForSeq2Seq,
EarlyStoppingCallback,
EncoderDecoderModel,
IntervalStrategy,
Seq2SeqTrainer,
Seq2SeqTrainingArguments,
Text2TextGenerationPipeline
)
from datasets import Dataset, DatasetDict, load_dataset, disable_progress_bar
import evaluate
import numpy as np
default_dataset = "raptorkwok/cantonese-traditional-chinese-parallel-corpus-gen3"
base_model_name = "fnlp/bart-base-chinese"
canton_ds = load_dataset(default_dataset)
yuezh_train = canton_ds["train"]
yuezh_test = canton_ds["test"]
yuezh_val = canton_ds["validation"]
print("Train Dataset Count: ", len(yuezh_train))
print("Test Dataset Count: ", len(yuezh_test))
print("Validation Dataset Count: ", len(yuezh_val))
yuezh_master = DatasetDict({"train": yuezh_train, "test": yuezh_test, "val": yuezh_val})
base_tokenizer = BertTokenizer.from_pretrained(base_model_name)
base_model = BartForConditionalGeneration.from_pretrained(base_model_name, output_hidden_states = True)
# ======================================================================================
# Process Dataset and Tokenization
# ======================================================================================
print("Training: Process Dataset and Tokenization")
def _filter_valid_examples(example):
return (
isinstance(example["yue"], str) and example["yue"].strip() and
isinstance(example["zh"], str) and example["zh"].strip()
)
def _preprocess_dataset(examples):
inputs = [text for text in examples["yue"]]
targets = [text for text in examples["zh"]]
model_inputs = base_tokenizer(inputs, text_target=targets, max_length=550, truncation=True)
return model_inputs
def _postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
# Filter with valid examples only
filtered_yuezh_master = yuezh_master.filter(_filter_valid_examples)
# Tokenization
tokenized_yuezh_master = filtered_yuezh_master.map(_preprocess_dataset, batched=True)
# remove unused columns
tokenized_yuezh_master = tokenized_yuezh_master.remove_columns(yuezh_train.column_names)
metric_bleu = evaluate.load("sacrebleu")
metric_chrf = evaluate.load("chrf")
data_collator = DataCollatorForSeq2Seq(tokenizer=base_tokenizer, model=base_model)
def _compute_metrics(eval_preds): # For Trainer
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = base_tokenizer.batch_decode(preds, skip_special_tokens=True)
labels = np.where(labels != -100, labels, base_tokenizer.pad_token_id)
decoded_labels = base_tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_preds, decoded_labels = _postprocess_text(decoded_preds, decoded_labels)
result_bleu = metric_bleu.compute(predictions=decoded_preds, references=decoded_labels, tokenize='zh')
result_chrf = metric_chrf.compute(predictions=decoded_preds, references=decoded_labels, word_order=2)
results = {"bleu": result_bleu["score"], "chrf": result_chrf["score"]}
prediction_lens = [np.count_nonzero(pred != base_tokenizer.pad_token_id) for pred in preds]
results["gen_len"] = np.mean(prediction_lens)
results = {k: round(v, 4) for k, v in results.items()}
return results
model_path = "test_minimal"
batch_size = 8
num_epochs = 1
training_args = Seq2SeqTrainingArguments(
output_dir = model_path,
evaluation_strategy = IntervalStrategy.STEPS,
logging_strategy = "no",
optim = "adamw_torch",
eval_steps = 10000,
save_steps = 10000,
learning_rate = 2e-5,
per_device_train_batch_size = batch_size,
per_device_eval_batch_size = batch_size,
weight_decay = 0.01,
save_total_limit = 1,
num_train_epochs = num_epochs,
predict_with_generate=True,
remove_unused_columns=True,
fp16 = True,
push_to_hub = False,
metric_for_best_model = "bleu",
load_best_model_at_end = True,
report_to = "wandb"
)
trainer = Seq2SeqTrainer(
model = base_model,
args = training_args,
train_dataset = tokenized_yuezh_master['train'],
eval_dataset = tokenized_yuezh_master['val'],
tokenizer = base_tokenizer,
data_collator = data_collator,
compute_metrics = _compute_metrics,
)
trainer.train()
</code></pre>
<p>The following error appears:</p>
<blockquote>
<p>AttributeError: <code>AcceleratorState</code> object has no attribute <code>distributed_type</code>. This happens if <code>AcceleratorState._reset_state()</code> was called and an <code>Accelerator</code> or <code>PartialState</code> was not reinitialized.</p>
</blockquote>
<p>My code does not use Accelerator directly; the Transformers internals use it. After searching on the Internet, I found out this is caused by version incompatibilities. I then tried the following version combinations (all I could find on the Internet):</p>
<p><strong>accelerate 1.4.0 transformers 4.37.2 tokenizers 0.15.2</strong></p>
<blockquote>
<p>TypeError: Accelerator.<strong>init</strong>() got an unexpected keyword argument 'dispatch_batches'</p>
</blockquote>
<p><strong>accelerate 0.15.0 transformers 4.35.2 tokenizers 0.14.0</strong>
<strong>accelerate 0.28.0 transformers 4.36.0 tokenizers 0.15.2</strong></p>
<blockquote>
<p>AttributeError: 'AcceleratorState' object has no attribute 'distributed_type'</p>
</blockquote>
<p><strong>My question:</strong> How do I run the training code without errors on Python 3.12.3? Ideally, using the latest possible versions to minimize the chance of encountering bugs.</p>
|
<python><huggingface-transformers>
|
2025-10-28 04:35:06
| 0
| 54,395
|
Raptor
|
79,802,360
| 962,844
|
Superset Guest Token
|
<p>I'm trying to access an Apache Superset dashboard using a guest token, but no matter what configuration I change, Superset always redirects to the login page instead of displaying the embedded dashboard.</p>
<p><strong>Setup:</strong></p>
<p>I plan to deploy Superset to a production server later, but for testing purposes, I’m currently running it inside a Docker container with Linux and PostgreSQL.
Then, I use a Python script to connect to the Superset API and generate the guest token.</p>
<p><strong>Steps to reproduce:</strong></p>
<ol>
<li>Dockerfile:</li>
</ol>
<pre><code># Usa a imagem base do Python
FROM python:3.10-slim
# Define basic environment variables
ENV LANG=C.UTF-8 \
LC_ALL=C.UTF-8 \
PYTHONUNBUFFERED=1 \
SUPERSET_HOME=/app/superset_home \
SQLALCHEMY_DATABASE_URI=postgresql://superset:superset@localhost:5432/superset \
PGDATA=/var/lib/postgresql/data \
POSTGRES_USER=superset \
POSTGRES_PASSWORD=superset \
POSTGRES_DB=superset
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
libssl-dev \
libffi-dev \
libpq-dev \
libmariadb-dev \
libmariadb-dev-compat \
libsasl2-dev \
python3-dev \
libldap2-dev \
libcurl4-openssl-dev \
curl \
nano \
postgresql \
postgresql-contrib \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
# Upgrade pip and install a compatible marshmallow
RUN pip install --no-cache-dir --upgrade pip setuptools wheel
RUN pip install "marshmallow>=3.22.0,<4.0"
RUN pip install psycopg2-binary
# Install Apache Superset 6.0.0rc2
RUN pip install apache-superset==6.0.0rc2
# Configure directories and variables
RUN mkdir -p $SUPERSET_HOME/uploads
ENV SUPERSET_CONFIG_PATH=/app/pythonpath/superset_config.py
# Create a basic configuration with SECRET_KEY
RUN mkdir -p /app/pythonpath
RUN echo "SECRET_KEY = '$(openssl rand -base64 42)'" > /app/pythonpath/superset_config.py
RUN echo "GUEST_TOKEN_JWT_SECRET = '$(openssl rand -base64 42)'" >> /app/pythonpath/superset_config.py
RUN echo "FEATURE_FLAGS = {'EMBEDDED_SUPERSET': True}" >> /app/pythonpath/superset_config.py
RUN echo "GUEST_TOKEN_JWT_ALGO = 'HS256'" >> /app/pythonpath/superset_config.py
RUN echo "GUEST_TOKEN_JWT_EXP_SECONDS = 3600" >> /app/pythonpath/superset_config.py
RUN echo "GUEST_ROLE_NAME = 'Gamma'" >> /app/pythonpath/superset_config.py
RUN echo "GUEST_TOKEN_JWT_AUDIENCE = 'http://localhost:8088'" >> /app/pythonpath/superset_config.py
RUN echo "CSV_EXTENSIONS = ['csv']" >> /app/pythonpath/superset_config.py
RUN echo "UPLOAD_FOLDER = '/app/superset_home/uploads'" >> /app/pythonpath/superset_config.py
RUN echo "ALLOW_FILE_UPLOAD = True" >> /app/pythonpath/superset_config.py
RUN echo "SQLALCHEMY_ENGINE_OPTIONS = {'pool_pre_ping': True}" >> /app/pythonpath/superset_config.py
RUN echo "PUBLIC_ROLE_LIKE = 'Gamma'" >> /app/pythonpath/superset_config.py
RUN echo "WTF_CSRF_ENABLED = False" >> /app/pythonpath/superset_config.py
RUN echo "LOG_LEVEL = 'DEBUG'" >> /app/pythonpath/superset_config.py
RUN echo "ENABLE_GUEST_TOKEN = True" >> /app/pythonpath/superset_config.py
RUN echo "EMBEDDED_SUPERSET = True" >> /app/pythonpath/superset_config.py
RUN echo "CORS_ORIGINS = ['*']" >> /app/pythonpath/superset_config.py
RUN echo "HTTP_HEADERS = {'X-Frame-Options': 'ALLOWALL'}" >> /app/pythonpath/superset_config.py
# Initialize PostgreSQL
RUN service postgresql start && \
su - postgres -c "psql -c \"CREATE USER superset WITH PASSWORD 'superset';\"" && \
su - postgres -c "createdb -O superset superset" && \
service postgresql stop
# Initialize the database and the admin user
ENV FLASK_APP=superset.app:create_app()
RUN service postgresql start && \
superset db upgrade && \
superset fab create-admin \
--username admin \
--firstname Admin \
--lastname User \
--email admin@superset.com \
--password admin && \
superset init && \
service postgresql stop
# Expose the default port
EXPOSE 8088
# Script to start PostgreSQL and Superset
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# Final command
CMD ["/entrypoint.sh"]
</code></pre>
<ol start="2">
<li>entrypoint.sh:</li>
</ol>
<pre><code>#!/bin/bash
# Start PostgreSQL
service postgresql start
# Wait until PostgreSQL is ready
until pg_isready -U superset; do
echo "Aguardando o PostgreSQL estar pronto..."
sleep 2
done
# Start Superset
exec superset run --host 0.0.0.0 --port 8088
</code></pre>
<ol start="3">
<li>superset_config.py:</li>
</ol>
<pre><code>SECRET_KEY = 'kvt7C57KoZu6ht77KIkCA6aFdRJpET+Hjec4JQPRTwNuTONmlMslzu+Y'
GUEST_TOKEN_JWT_SECRET = 'Fd4wYsik8B/fRGgKCPBk0Wn+QptdciDQLKnN89THyoCLXzIn6wvRbGh4'
FEATURE_FLAGS = {'EMBEDDED_SUPERSET': True}
GUEST_TOKEN_JWT_ALGO = 'HS256'
GUEST_TOKEN_JWT_EXP_SECONDS = 3600
GUEST_ROLE_NAME = 'Gamma'
GUEST_TOKEN_JWT_AUDIENCE = 'http://localhost:8088'
CSV_EXTENSIONS = ['csv']
UPLOAD_FOLDER = '/app/superset_home/uploads'
ALLOW_FILE_UPLOAD = True
SQLALCHEMY_ENGINE_OPTIONS = {'pool_pre_ping': True}
PUBLIC_ROLE_LIKE = 'Gamma'
WTF_CSRF_ENABLED = False
LOG_LEVEL = 'DEBUG'
ENABLE_GUEST_TOKEN = True
EMBEDDED_SUPERSET = True
CORS_ORIGINS = ['*']
HTTP_HEADERS = {'X-Frame-Options': 'ALLOWALL'}
</code></pre>
<ol start="4">
<li>Creating the volume and container:</li>
</ol>
<pre><code>docker build -t superset_6_linux:latest .
docker run -d -p 8088:8088 --name superset6 superset_6_linux:latest
docker start -ai superset6
</code></pre>
<ol start="5">
<li><p>Log in to Superset, grant all permissions to the Gamma role, create a dashboard, enable the “Allow Embedding” option, and publish it.</p>
</li>
<li><p>Run the following Python script to generate a guest token:</p>
</li>
</ol>
<pre><code>import requests
import json
import time
SUPERSET_URL = "http://localhost:8088" # Ajuste para sua URL
USERNAME = "admin" # Ajuste
PASSWORD = "admin" # Ajuste
DASHBOARD_ID = 1 # Ajuste para o ID do seu dashboard
session = requests.Session()
# Step 1: Log in to obtain the JWT (access_token)
login_url = f"{SUPERSET_URL}/api/v1/security/login"
login_payload = {
"username": USERNAME,
"password": PASSWORD,
"provider": "db",
"refresh": False
}
login_response = session.post(login_url, json=login_payload)
if login_response.status_code != 200:
raise ValueError(f"Falha no login: {login_response.status_code} - {login_response.text}")
login_data = login_response.json()
jwt_token = login_data["access_token"]
print("")
print(login_data)
print(f"JWT obtido: {jwt_token}")
# Step 2: Obtain the CSRF token
csrf_url = f"{SUPERSET_URL}/api/v1/security/csrf_token/"
csrf_headers = {
'Authorization': f'Bearer {jwt_token}',
'Content-Type': 'application/json'
}
csrf_response = session.get(csrf_url, headers=csrf_headers)
if csrf_response.status_code != 200:
raise ValueError(f"Falha ao obter CSRF: {csrf_response.status_code} - {csrf_response.text}")
csrf_data = csrf_response.json()
csrf_token = csrf_data["result"]
print("")
print(csrf_data)
print(f"CSRF token obtido: {csrf_token}")
# Step 3: Check the Guest role permissions (for debugging)
roles_url = f"{SUPERSET_URL}/api/v1/security/roles"
headers = {
'Authorization': f'Bearer {jwt_token}',
'Content-Type': 'application/json'
}
roles_response = session.get(roles_url, headers=headers)
roles_data = roles_response.json()
print("")
print(roles_data)
# dashboard data
dashboard_url = f"{SUPERSET_URL}/api/v1/dashboard/{DASHBOARD_ID}"
headers = {
'Authorization': f'Bearer {jwt_token}',
'Content-Type': 'application/json'
}
dashboard_response = session.get(dashboard_url, headers=headers)
dashboard_data = dashboard_response.json()
print("")
print(dashboard_data)
# Step 5: Generate a guest token for the dashboard
guest_token_url = f"{SUPERSET_URL}/api/v1/security/guest_token/"
guest_token_payload = {
"user": {
"username": "convidado",
},
"resources": [{
"type": "dashboard",
"id": str(DASHBOARD_ID)
}],
"rls": [], # Regras de segurança row-level, se necessário
"exp": int(time.time()) + 3600 # Expiração em 1 hora
}
headers['X-CSRF-Token'] = csrf_token
guest_token_response = session.post(guest_token_url, json=guest_token_payload, headers=headers)
if guest_token_response.status_code != 200:
raise ValueError(f"Falha ao obter guest token: {guest_token_response.status_code} - {guest_token_response.text}")
guest_data = guest_token_response.json()
guest_token = guest_data["token"]
print("")
print(guest_data)
print(f"Guest token obtido: {guest_token}")
# Step 6: Build the redirect URL for the dashboard
redirect_url = f"{SUPERSET_URL}/superset/dashboard/{DASHBOARD_ID}/?token={guest_token}"
print("")
print(redirect_url)
print("")
# ...
headers = {}
headers['X-CSRF-Token'] = csrf_token
redirect_url = f"{SUPERSET_URL}/superset/dashboard/{DASHBOARD_ID}/?token={jwt_token}"
foo_response = session.get(redirect_url, headers=headers)
print("")
print(foo_response)
print("")
session.close()
</code></pre>
<ol start="7">
<li><p>Copy the generated embedded URL and open it in a browser.</p>
</li>
<li><p>Superset redirects to the login page instead of showing the dashboard.</p>
</li>
</ol>
<p>P.S. I've tried versions 4.0.0 and 5.0.0 but no luck!</p>
|
<python><docker><apache-superset>
|
2025-10-28 01:36:36
| 0
| 2,678
|
Mateus
|
79,802,146
| 10,034,073
|
Python catch ImportError without catching transitive errors raised by the module
|
<p>In Python, when an <code>import</code> fails, how can I differentiate between:</p>
<ul>
<li>The module doesn't exist.</li>
<li>The module exists, but it tried importing another module that didn't exist.</li>
</ul>
<hr />
<h1>Example</h1>
<pre class="lang-py prettyprint-override"><code># ./first.py
try:
import second
except ImportError:
print("second.py doesn't exist")
</code></pre>
<pre class="lang-py prettyprint-override"><code># ./second.py
import third # ModuleNotFoundError: No module named "third"
# Do stuff...
def foo():
...
</code></pre>
<pre><code>>>> import first
second.py doesn't exist
</code></pre>
<p>The error message printed in this example is incorrect. <code>second.py</code> <em>does</em> exist, and the <code>ImportError</code> is actually due to <code>second.py</code> itself containing an invalid import.</p>
<p>In this case, I want all transitive errors in <code>second.py</code> to propagate un-caught. The <strong>only</strong> exception I want to catch is the case where there is no file <code>second.py</code> to import.</p>
<hr />
<p>This paradigm <a href="https://softwareengineering.stackexchange.com/questions/262697/is-it-safe-to-catch-importerror-when-trying-to-import-optional-modules">has been discussed</a> on Software Engineering SE but without a method for differentiation.</p>
<p><em>Yes, I am fully aware that this is a strange situation and probably smells like an xy-problem. This is all for some temporary testing, where <code>second.py</code> is a script I'm creating and deleting as I test things: not strictly necessary, but now I'm interested in the question theoretically.</em></p>
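<p>For reference, a small sketch of one way to tell the two cases apart, using the <code>name</code> attribute that <code>ModuleNotFoundError</code> carries:</p>
<pre class="lang-py prettyprint-override"><code># ./first.py
try:
    import second
except ModuleNotFoundError as exc:
    if exc.name == "second":
        print("second.py doesn't exist")
    else:
        raise  # a transitive import inside second.py failed; let it propagate
</code></pre>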
|
<python><python-import><importerror>
|
2025-10-27 18:40:28
| 2
| 444
|
kviLL
|
79,802,076
| 21,826,195
|
Why do I get a SettingWithCopyWarning when using shift and dropna inside a function?
|
<p>In general, when I receive this warning</p>
<pre><code>/home/mo/mwe.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>I head to the <a href="https://stackoverflow.com/a/53954986/21826195">second answer</a> on <a href="https://stackoverflow.com/questions/20625582/how-to-deal-with-settingwithcopywarning-in-pandas">How to deal with SettingWithCopyWarning in Pandas</a> and try to reduce my code to one of the presented examples.</p>
<p>This time, however, I'm stuck with this not-so-minimal MWE:</p>
<pre><code>import pandas as pd
def edit_df(df):
df["shifted"] = df["base"].shift(-1)
df["diff"] = df["shifted"] - df["base"]
df = df.dropna()
df["match"] = df["diff"] > 1
return df
def main():
df = pd.DataFrame({'base': [1,2]})
df = edit_df(df)
print(df)
main()
</code></pre>
<p>I tried to minimize it further, but the warning disappears when I remove any of the blocks or paste the function's code into main. Hence, my assumption is that the warning is caused by a combination of the operations. But it eludes me why this would be the case. From this <a href="https://stackoverflow.com/questions/31614011/dataframe-modified-inside-a-function">question</a> I assume I'm always working on the original dataframe, as intended.</p>
<p>From my understanding, I probably do slicing somewhere, so I tried to place <code>loc[: , 'column_name']</code> everywhere I assume slicing happens (<code>edit_df_loc</code> below), like the docs suggest. It does the modifications, but it still shows the warning.</p>
<p>Using <code>dropna().copy()</code> or <code>dropna(inplace=True)</code> makes the warning disappear. But I don't know why I'd want to copy the dataframe (do I have to?), and <code>inplace</code> <a href="https://stackoverflow.com/questions/45570984/in-pandas-is-inplace-true-considered-harmful-or-not">shouldn't be used</a>.</p>
<p>Why do I face the warning and how do I properly fix it?</p>
<hr />
<p>pandas version 2.3.3</p>
<p>I could well be missing terminology, so pointing me to a duplicate-target that explains the situation is also highly appreciated.</p>
<p>For reference, here are some of the variations that don't produce a warning and my attempt to use <code>loc[]</code>.
I'm constructing a new dataframe every time, so there shouldn't be any slice upstream, as suggested <a href="https://stackoverflow.com/questions/42379818/correct-way-to-set-new-column-in-pandas-dataframe-to-avoid-settingwithcopywarnin">here</a>.</p>
<pre><code>import pandas as pd
def edit_df(df):
df["shifted"] = df["base"].shift(-1)
df["diff"] = df["shifted"] - df["base"]
df = df.dropna()
df["match"] = df["base"] > 1
return df
def edit_df1(df):
df = df.dropna()
df["match"] = df["base"] > 1
return df
def edit_df2(df):
df["shifted"] = df["base"].shift(-1)
df["diff"] = df["shifted"] - df["base"]
df = df.dropna()
return df
def edit_df3(df):
df["shifted"] = df["base"].shift(-1)
df["diff"] = df["shifted"] - df["base"]
df["match"] = df["base"] > 1
return df
def edit_df_copy(df):
df["shifted"] = df["base"].shift(-1)
df["diff"] = df["shifted"] - df["base"]
df = df.dropna().copy()
df["match"] = df["base"] > 1
return df
def edit_df_loc(df):
df.loc[:, "shifted"] = df.loc[:, "base"].shift(-1)
df.loc[:, "diff"] = df.loc[:, "shifted"] - df.loc[:, "base"]
df = df.dropna()
df.loc[:, "match"] = df.loc[:, "base"] > 1
return df
def main():
df = pd.DataFrame({'base': [1,2]})
df = edit_df_copy(df)
df = pd.DataFrame({'base': [1,2]})
df = edit_df1(df)
df = pd.DataFrame({'base': [1,2]})
df = edit_df2(df)
df = pd.DataFrame({'base': [1,2]})
df = edit_df3(df)
print(df)
main()
</code></pre>
|
<python><pandas><dataframe><pandas-settingwithcopy-warning>
|
2025-10-27 17:07:10
| 3
| 2,028
|
Mo_
|
79,802,064
| 1,200,914
|
Lock a DynamoDB table on reading/writing operations
|
<p>I have a DynamoDB table with entries that can have a status (waiting, error, running...). Only up to 25 entries can have running status.</p>
<p>My objective is to have an AWS Lambda function that checks if there are fewer than 25 entries with "running" status, and if so, retrieves the next N entries that have the value "waiting" and sets them to "running".</p>
<p>My problem is that my Lambda function can run simultaneously with other AWS lambdas, and I guess I need some kind of locking while reading/writing to the table. I've found this library to lock the table while working on it <a href="https://python-dynamodb-lock.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">https://python-dynamodb-lock.readthedocs.io/en/latest/index.html</a>, but:</p>
<ol>
<li>I don't know if it locks on reading operations too</li>
<li>It seems a dead library (last update was last year)</li>
</ol>
<p>Do you know if this library locks on reads too, or are there alternative methods? (I'd be OK with moving from DynamoDB to another kind of system.)</p>
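<p>For what it's worth, one pattern that avoids a table-wide lock altogether is DynamoDB's own conditional writes: keep a counter item and only increment it while it is below the limit, so concurrent Lambdas race on the condition instead of on a lock. A rough boto3 sketch, with a hypothetical table name and key layout:</p>
<pre><code>import boto3
from botocore.exceptions import ClientError

# Hypothetical table with a single counter item {"pk": "running_counter", "running": N}.
table = boto3.resource("dynamodb").Table("jobs")


def try_claim_slot():
    """Atomically increment the running counter, but only while it is below 25."""
    try:
        table.update_item(
            Key={"pk": "running_counter"},
            UpdateExpression="ADD running :one",
            ConditionExpression="attribute_not_exists(running) OR running < :limit",
            ExpressionAttributeValues={":one": 1, ":limit": 25},
        )
        return True
    except ClientError as exc:
        if exc.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # already 25 running entries
        raise
</code></pre>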
|
<python><amazon-web-services><aws-lambda><amazon-dynamodb>
|
2025-10-27 16:42:29
| 1
| 3,052
|
Learning from masters
|
79,801,880
| 3,997,262
|
How to call custom embedding specifically in litellm?
|
<p>I am trying to implement a custom llm router and have this in my <code>.yaml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>model_list:
- model_name: "text-embedding-3-small"
litellm_params:
model: "custom_llm_router/text-embedding-3-small"
...
</code></pre>
<p>in my py code:</p>
<pre class="lang-py prettyprint-override"><code>from litellm import CustomLLM
from litellm.types.utils import ModelResponse
from litellm.utils import EmbeddingResponse
...
class CustomChat(CustomLLM):
def __init__(self):
...
def embedding(self, *args, **kwargs) -> EmbeddingResponse:
...
def aembedding(self, *args, **kwargs) -> EmbeddingResponse:
...
</code></pre>
<p>The completion call hits the <code>acompletion</code> method; however, embedding calls do not hit <code>aembedding</code> or <code>embedding</code>.</p>
<p>Calling code (client side):</p>
<pre class="lang-py prettyprint-override"><code>response = client.embeddings.create(
model= "text-embedding-3-small",
input=["Hello World"]
)
</code></pre>
<p>I see this error:</p>
<pre class="lang-bash prettyprint-override"><code>LiteLLM: Proxy initialized with Config, Set models:
text-embedding-3-small
custom-gpt-5
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
...
...
...
litellm.exceptions.UnsupportedParamsError: litellm.UnsupportedParamsError: Setting {'encoding_format': 'base64'} is not supported by custom_llm_router. To drop it from the call, set `litellm.drop_params = True`.. Received Model Group=text-embedding-3-small
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2
INFO: 127.0.0.1:52405 - "POST /embeddings HTTP/1.1" 400 Bad Request
</code></pre>
<p>Any hints please?</p>
|
<python><litellm>
|
2025-10-27 13:42:18
| 0
| 592
|
programmer
|
79,801,782
| 4,393,951
|
Python execution log with call stack
|
<p>I have complicated Python code which I wish to trace. I want to have a log file that will list all function calls in the order in which they are called, and for each call:</p>
<ul>
<li>the current call stack</li>
<li>parameters passed to the function</li>
<li>return values</li>
</ul>
<p>Doing this by inspecting the code is hard because some of the branching is unknown until the code actually runs, because there are conditionals which depend on external inputs provided at run time. Thus we need an actual trace.</p>
<p>I do not want to modify the code. The functionality should be external to the code.</p>
<p>Can you suggest a solution to my problem?</p>
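<p>For reference, a minimal sketch of a tracer that stays outside the traced code, built on the standard <code>sys.setprofile</code> hook (which reports every call and return); the wrapper below is a hypothetical <code>tracer.py</code> run as <code>python tracer.py your_script.py</code>:</p>
<pre><code>import runpy
import sys

LOG = open("trace.log", "w")


def profiler(frame, event, arg):
    if event not in ("call", "return"):
        return
    depth = 0
    f = frame.f_back
    while f is not None:  # current call-stack depth
        depth += 1
        f = f.f_back
    code = frame.f_code
    name = getattr(code, "co_qualname", code.co_name)
    indent = "  " * depth
    if event == "call":
        args = {k: frame.f_locals.get(k) for k in code.co_varnames[:code.co_argcount]}
        LOG.write(f"{indent}CALL {name} args={args!r}\n")
    else:
        LOG.write(f"{indent}RETURN {name} value={arg!r}\n")


if __name__ == "__main__":
    target = sys.argv[1]
    sys.setprofile(profiler)
    try:
        runpy.run_path(target, run_name="__main__")
    finally:
        sys.setprofile(None)
        LOG.close()
</code></pre>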
|
<python><trace>
|
2025-10-27 11:47:09
| 1
| 499
|
Yair M
|
79,801,771
| 244,297
|
How to detect redundant assignments with Python linters?
|
<p>Consider this small function:</p>
<pre><code>def test():
x = 1
x = 2
x = 3
return x + 1
</code></pre>
<p>Apparently, the first two assignments to <code>x</code> have no effect here and can be removed. Yet surprisingly, <code>pylint</code>/<code>flake8</code>/<code>ruff</code> don't produce any warnings about it (at least with the default config). Is there any particular reason for this?</p>
|
<python><static-analysis>
|
2025-10-27 11:39:07
| 1
| 151,764
|
Eugene Yarmash
|
79,801,475
| 10,339,757
|
Convert existing dataset to rioxarray object
|
<p>I have a dataset that I need to convert into a rioxarray dataset so I can use regridding features, but I can't work out how to convert an already existing xarray object to rioxarray.</p>
<p>Unfortunately I can't just load the object in as a rioxarray object, as the file is a CSV that I convert from a pandas object to an xarray dataset.</p>
<pre><code>df = pd.read_csv('file.csv')
df=df.set_index(["date", "latitude", "longitude"])
ds=df.to_xarray()
</code></pre>
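<p>For reference, rioxarray works as an accessor on existing xarray objects once the package is imported, so the converted dataset just needs its spatial metadata declared; a minimal sketch, assuming the latitude/longitude values are in EPSG:4326:</p>
<pre><code>import pandas as pd
import rioxarray  # noqa: F401  (importing it registers the .rio accessor)

df = pd.read_csv('file.csv')
df = df.set_index(["date", "latitude", "longitude"])
ds = df.to_xarray()

# Declare which dims are spatial and what CRS they use (EPSG:4326 is an assumption here).
ds = ds.rio.set_spatial_dims(x_dim="longitude", y_dim="latitude")
ds = ds.rio.write_crs("EPSG:4326")
</code></pre>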
|
<python><geospatial><python-xarray>
|
2025-10-27 05:27:11
| 2
| 371
|
thefrollickingnerd
|
79,800,033
| 307,050
|
How to asynchronously push state from backend to frontend
|
<p>I'm trying to build a simple app using <a href="https://reflex.dev/" rel="nofollow noreferrer">Reflex</a>.</p>
<p>My use case is different from most of the examples shown in the docs.</p>
<p>I need to constantly receive data from a UDP socket, parse the contents and then push certain values from the backend to the frontend.</p>
<p>My receiver and parser work fine, however, I cannot push updates from backend to frontend.</p>
<pre class="lang-py prettyprint-override"><code>import threading
from typing import Callable
import reflex as rx
from rxconfig import config
import socket
from .forza_dash import ForzaDash
class UDPReceiver:
type Callback = Callable[[bytes], None]
sock: socket = None
subscriber: Callback = None
running: bool = False
def __init__(self, port: int):
self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
addr = ('0.0.0.0', port)
self.sock.bind(addr)
def receive_loop(self):
while self.running:
message, _ = self.sock.recvfrom(1024)
if self.subscriber != None:
self.subscriber(message)
def start(self):
if self.running:
return
self.running = True
threading.Thread(target=self.receive_loop).start()
def stop(self):
self.running = False
def subscribe(self, callback: Callback):
self.subscriber = callback
class State(rx.State):
# UDP receiver backend only
_udp_receiver: UDPReceiver = None
# last update backend only
_last_update: ForzaDash = None
# timestamp
tstamp: int = -1
def __init__(self, *args, **kwargs):
print("State __init__")
super().__init__(*args, **kwargs)
self._udp_receiver = UDPReceiver(5300)
self._udp_receiver.subscribe(self.on_data)
def start_receiver(self):
self._udp_receiver.start()
def on_data(self, data: bytes):
print(f"State got {len(data)} bytes")
if len(data) == ForzaDash.size():
# parse and map desired values
update = ForzaDash(data)
self._last_update = update
self.tstamp = update.tod
@rx.var
def timestamp(self) -> int:
return self.tstamp
#return self._last_update.tod
@rx.page(on_load=State.start_receiver)
def index() -> rx.Component:
return rx.container(
rx.vstack(
rx.text(
f"time: {State.tstamp}"
),
),
)
app = rx.App()
app.add_page(index)
</code></pre>
<p>When I start the app, my receiver thread starts, receives data, the bytes are parsed and the values are correct. However, the <code>timestamp</code> <code>@rx.var</code> is not updated at all. It remains at its internal value of <code>-1</code>.</p>
<p>Is there any way to force or trigger an update of states from backend to frontend?</p>
|
<python><python-reflex>
|
2025-10-26 12:04:27
| 1
| 1,347
|
mefiX
|
79,800,025
| 10,490,375
|
Build numpy 2.3+ without accelerated libraries
|
<p>Related post: <a href="https://stackoverflow.com/questions/32228967/compile-numpy-without-intel-mkl-blas-atlas-lapack">Compile numpy WITHOUT Intel MKL/BLAS/ATLAS/LAPACK</a></p>
<p>Recent versions of numpy use <code>meson</code> for build configuration. I can build numpy from source, but I have failed to exclude the BLAS/LAPACK/... dependencies.</p>
<p>Here is what I tried in github workflow:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: Build Numpy no BLAS
run: |
curl -L -o numpy.tar.gz https://github.com/numpy/numpy/releases/download/v2.3.4/numpy-2.3.4.tar.gz
tar -xzf numpy.tar.gz
# patch meson.build to explicitly skip blas detection
patch numpy-2.3.4/numpy/meson.build build_tools/numpy_disable_blas.diff
uv pip install meson-python cython build
uv run python -m build --wheel --outdir ~/numpycache --config-setting=setup-args=-Dblas=none --config-setting=setup-args=-Dlapack=none ./numpy-2.3.4
</code></pre>
<p>I patched <code>meson.build</code> because there is no build argument that disables BLAS/LAPACK detection; after the patch it looks like the following:</p>
<pre class="lang-none prettyprint-override"><code>...
# numpy-2.3.4/numpy/meson.build
# start from line 108
# First try scipy-openblas, and if found don't look for cblas or lapack, we
# know what's inside the scipy-openblas wheels already.
# if blas_name == 'openblas' or blas_name == 'auto'
# blas = dependency('scipy-openblas', method: 'pkg-config', required: false)
# if blas.found()
# blas_name = 'scipy-openblas'
# endif
# endif
# if blas_name == 'auto'
# foreach _name : blas_order
# if _name == 'mkl'
# blas = dependency('mkl',
# modules: ['cblas'] + blas_interface + mkl_opts,
# required: false, # may be required, but we need to emit a custom error message
# version: mkl_version_req,
# )
# # Insert a second try with MKL, because we may be rejecting older versions
# # or missing it because no pkg-config installed. If so, we need to retry
# # with MKL SDL, and drop the version constraint (this always worked).
# if not blas.found() and mkl_may_use_sdl
# blas = dependency('mkl', modules: ['cblas', 'sdl: true'], required: false)
# endif
# else
# if _name == 'flexiblas' and use_ilp64
# _name = 'flexiblas64'
# endif
# blas = dependency(_name, modules: ['cblas'] + blas_interface, required: false)
# endif
# if blas.found()
# break
# endif
# endforeach
# else
# if blas_name == 'mkl'
# blas = dependency('mkl',
# modules: ['cblas'] + blas_interface + mkl_opts,
# required: false,
# version: mkl_version_req,
# )
# # Same deal as above - try again for MKL
# if not blas.found() and mkl_may_use_sdl
# blas = dependency('mkl', modules: ['cblas', 'sdl: true'], required: false)
# endif
# else
# blas = dependency(blas_name, modules: ['cblas'] + blas_interface, required: false)
# endif
# endif
blas = disabler()
have_blas = false # blas.found()
if have_blas
_args_blas = ['-DHAVE_CBLAS'] # note: used for C and C++ via `blas_dep` below
if use_ilp64
_args_blas += ['-DHAVE_BLAS_ILP64']
if 'openblas' in blas.name()
_args_blas += ['-DOPENBLAS_ILP64_NAMING_SCHEME']
endif
endif
if blas_symbol_suffix == 'auto'
if blas_name == 'scipy-openblas' and use_ilp64
blas_symbol_suffix = '64_'
else
blas_symbol_suffix = blas.get_variable('symbol_suffix', default_value: '')
endif
message(f'BLAS symbol suffix: @blas_symbol_suffix@')
endif
if blas_symbol_suffix != ''
_args_blas += ['-DBLAS_SYMBOL_SUFFIX=' + blas_symbol_suffix]
endif
blas_dep = declare_dependency(
dependencies: [blas],
compile_args: _args_blas,
)
else
if allow_noblas
blas_dep = []
else
error('No BLAS library detected! Install one, or use the ' + \
'`allow-noblas` build option (note, this may be up to 100x slower ' + \
'for some linear algebra operations).')
endif
endif
# if 'mkl' in blas.name() or blas.name() == 'accelerate' or blas_name == 'scipy-openblas'
# # For these libraries we know that they contain LAPACK, and it's desirable to
# # use that - no need to run the full detection twice.
# lapack = blas
# else
# if lapack_name == 'auto'
# foreach _name : lapack_order
# lapack = dependency(_name, modules: ['lapack'] + blas_interface, required: false)
# if lapack.found()
# break
# endif
# endforeach
# else
# lapack = dependency(lapack_name, modules: ['lapack'] + blas_interface, required: false)
# endif
# endif
lapack = disabler()
have_lapack = false # lapack.found()
if not have_lapack and not allow_noblas
error('No LAPACK library detected! Install one, or use the ' + \
'`allow-noblas` build option (note, this may be up to 100x slower ' + \
'for some linear algebra operations).')
else
lapack_dep = declare_dependency(dependencies: [lapack, blas_dep])
endif
...
</code></pre>
<p>The build log shows that the BLAS-related variables are not set, which is exactly what I expected.</p>
<pre class="lang-none prettyprint-override"><code>Configuring __config__.py using configuration
..\numpy\meson.build:445: WARNING: The variable(s) 'BLAS_INCLUDEDIR', 'BLAS_LIBDIR', 'BLAS_OPENBLAS_CONFIG', 'BLAS_PCFILEDIR', 'BLAS_TYPE_NAME', 'BLAS_VERSION', 'LAPACK_INCLUDEDIR', 'LAPACK_LIBDIR', 'LAPACK_OPENBLAS_CONFIG', 'LAPACK_PCFILEDIR', 'LAPACK_TYPE_NAME', 'LAPACK_VERSION' in the input file 'numpy\__config__.py.in' are not present in the given configuration data.
Checking for size of "short" : 2
...
Build targets in project: 104
WARNING: Deprecated features used:
* 1.3.0: {'Source file src/umath/svml/linux/avx512/svml_z0_acos_d_la.s in the 'objects' kwarg is not an object.'}
NumPy 2.3.4
User defined options
Native files: /home/runner/work/STDF-Viewer/STDF-Viewer/numpy-2.3.4/.mesonpy-l6ngii4n/meson-python-native-file.ini
b_ndebug : if-release
b_vscrt : md
blas : none
buildtype : release
lapack : none
Found ninja-1.13.1 at /usr/local/bin/ninja
+ /usr/local/bin/ninja
[1/512] Copying file numpy/__init__.pxd
...
</code></pre>
<p>However, numpy still manages to detect BLAS, which might be preinstalled on the GitHub runner:</p>
<pre><code># numpy.show_config()
"Build Dependencies": {
"blas": {
"name": "scipy-openblas",
"found": true,
"version": "0.3.30",
"detection method": "pkgconfig",
"include directory": "/opt/_internal/cpython-3.13.8/lib/python3.13/site-packages/scipy_openblas64/include",
warnings.warn("Install `pyyaml` for better output", stacklevel=1)
"lib directory": "/opt/_internal/cpython-3.13.8/lib/python3.13/site-packages/scipy_openblas64/lib",
"openblas configuration": "OpenBLAS 0.3.30 USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64",
"pc file directory": "/project/.openblas"
},
"lapack": {
"name": "scipy-openblas",
"found": true,
"version": "0.3.30",
"detection method": "pkgconfig",
"include directory": "/opt/_internal/cpython-3.13.8/lib/python3.13/site-packages/scipy_openblas64/include",
"lib directory": "/opt/_internal/cpython-3.13.8/lib/python3.13/site-packages/scipy_openblas64/lib",
"openblas configuration": "OpenBLAS 0.3.30 USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell MAX_THREADS=64",
"pc file directory": "/project/.openblas"
}
},
</code></pre>
<p>Is there anything I missed? What is the proper way to disable BLAS detection?</p>
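<p>For what it's worth, a small runtime check can at least confirm quickly whether a given wheel ended up linked against BLAS/LAPACK. This is only a verification sketch (it assumes numpy &gt;= 1.25, where <code>show_config</code> accepts <code>mode="dicts"</code>), not a way to disable the detection itself:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# Inspect the build metadata of the installed wheel programmatically.
cfg = np.show_config(mode="dicts")
blas = cfg["Build Dependencies"]["blas"]
lapack = cfg["Build Dependencies"]["lapack"]
print("BLAS found:", blas["found"], "| LAPACK found:", lapack["found"])

# Fail the CI job early if BLAS slipped back in.
assert not blas["found"] and not lapack["found"], "wheel was still built against BLAS/LAPACK"
</code></pre>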
|
<python><numpy><lapack><blas><meson-build>
|
2025-10-26 11:52:01
| 1
| 376
|
nochenon
|
79,799,743
| 19,968,680
|
Will calling Engine.dispose() in a forked process cause errors in another process?
|
<p>When using SQLAlchemy in a forked process, the recommended approach per <a href="https://docs.sqlalchemy.org/en/latest/core/pooling.html#using-connection-pools-with-multiprocessing-or-os-fork" rel="nofollow noreferrer">sqlalchemy documentation</a> (EDIT: originally linked <a href="https://docs.sqlalchemy.org/en/13/core/pooling.html#using-connection-pools-with-multiprocessing-or-os-fork" rel="nofollow noreferrer">1.3 docs</a>) is to call engine.dispose() immediately upon initializing the forked process. This is to prevent child processes from sharing the connection with a parent process. If you are using a Pool object, it will look something like this:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool
engine = create_engine("mysql+mysqldb://user:pass@host/dbname")
def run_in_process(some_data_record):
with engine.connect() as conn:
conn.execute(text("..."))
def initializer():
"""ensure the parent proc's database connections are not touched
in the new connection pool"""
engine.dispose(close=False)
with Pool(10, initializer=initializer) as p:
p.map(run_in_process, data)
</code></pre>
<p>However, disposing the entire connection pool seems a bit extreme to me. In my mind, this results in a child process telling the parent process to drop all of its connections. This works just fine in a single-user application, but I am unsure of how calling <code>Engine.dispose()</code> will behave in a web application being accessed by many users.</p>
<p>Let's say I have forked process A, which is currently doing a long-running transaction with my engine. When I create a new process B, and call <code>Engine.dispose()</code>, will this cause disruptions in process A?</p>
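<p>To make the fork semantics concrete, here is a minimal sketch (POSIX-only, and using a throwaway SQLite URL as a stand-in for MySQL) of what <code>dispose(close=False)</code> does after a fork: the child discards its inherited copy of the pool and builds fresh connections, while the parent's pool object and its checked-out connection are left alone.</p>
<pre class="lang-py prettyprint-override"><code>import os
from sqlalchemy import create_engine, text

engine = create_engine("sqlite:///demo.db")   # placeholder URL for the sketch

conn = engine.connect()                       # parent holds a live connection
conn.execute(text("SELECT 1"))

pid = os.fork()
if pid == 0:                                  # child process
    engine.dispose(close=False)               # drop the child's pool references only,
                                              # without closing the inherited connections
    with engine.connect() as child_conn:      # child gets brand-new connections
        child_conn.execute(text("SELECT 1"))
    os._exit(0)

os.waitpid(pid, 0)
conn.execute(text("SELECT 1"))                # parent's original connection still works
conn.close()
</code></pre>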
|
<python><sqlalchemy><fork><connection-pooling>
|
2025-10-25 21:12:18
| 2
| 322
|
gbiz123
|
79,799,713
| 215,761
|
apache fails to import from correct PYTHONPATH
|
<p>The situation is like this:</p>
<ol>
<li><p>in my <code>httpd.conf</code>, I have</p>
<pre><code>SetEnv PYTHONPATH /Users/<MY_NAME_>/PATH_TO/py3
</code></pre>
</li>
<li><p>my Python script runs in the browser, prints something like a header, and then prints the environment from within the script like this:</p>
<pre class="lang-py prettyprint-override"><code>[(x, os.environ[x]) for x in os.environ if "PY" in x]
</code></pre>
</li>
</ol>
<p>I can see this path to my Python stuff printed. It <em>is</em> in the Apache environment.</p>
<p>However the next line in my script,
which is:</p>
<pre><code>from my_module import * # module **is** in the directory which is in PYTHONPATH
</code></pre>
<p>gives an error:</p>
<pre><code>ModuleNotFoundError: No module named '....'
</code></pre>
<p>This doesn't make sense to me.</p>
<p>This is under macOS 13.6.3 Ventura.</p>
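<p>Regardless of why the interpreter did not pick <code>PYTHONPATH</code> up at startup, a workaround sketch (assuming a plain CGI-style script, which seems to match this setup) is to extend <code>sys.path</code> at the top of the script from the value that Apache demonstrably passes through:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys

# SetEnv reaches os.environ, but sys.path was already built when the
# interpreter started, so splice the directories in before importing.
extra = os.environ.get("PYTHONPATH", "")
for path in filter(None, extra.split(os.pathsep)):
    if path not in sys.path:
        sys.path.insert(0, path)

from my_module import *  # noqa: E402,F401,F403
</code></pre>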
|
<python><macos><apache>
|
2025-10-25 19:37:21
| 1
| 320
|
Vlad K.
|
79,799,613
| 2,741,620
|
Horizontal scrollbar on tkinter treeview is not working
|
<p>I am using tkinter, pandas and a ttk Treeview.
The Treeview displays an Excel file (chosen via filedialog) with 15 columns, which is very wide, hence I need a horizontal scrollbar.</p>
<p>However, the scrollbar does not work; the Treeview stretches the Excel data out of the window.</p>
<p>How can I solve this?</p>
<pre><code># Create Excel_Frame for TreeView Excelsheet
Excel_Frame = ttk.Frame(Body_Frame)
Excel_Frame.grid(row=0, column=1)
treeScroll_x = ttk.Scrollbar(Excel_Frame, orient="horizontal")
treeScroll_y = ttk.Scrollbar(Excel_Frame, orient="vertical")
treeScroll_x.pack(side="bottom", fill="x")
treeScroll_y.pack(side="right", fill="y")
treeview = ttk.Treeview(Excel_Frame, show="headings", xscrollcommand=treeScroll_x.set, yscrollcommand=treeScroll_y.set,
height=30)
treeview.pack(side="left", fill="both", expand=True)
treeScroll_x.config(command=treeview.xview)
treeScroll_y.config(command=treeview.yview)
Excel_Frame.grid_columnconfigure(0, weight=1)
Excel_Frame.grid_rowconfigure(0, weight=1)
</code></pre>
<pre><code># Function to load data from Excel to Treeview
def load_data_func():
global filepath
# Clear existing data in Treeview
for item in treeview.get_children():
treeview.delete(item)
# Read Excel File
print("filepath = ", filepath)
selected_sheet = "2020" # Testing specific sheet
excel_file = pd.read_excel(filepath, sheet_name=selected_sheet)
# Set Treeview Header
treeview["columns"] = list(excel_file.columns)
#treeview["show"] = "headings" # Hide the default first column
for col in excel_file.columns:
treeview.heading(col, text=col)
treeview.column(col, width=100, minwidth=150, stretch=False)
# Insert data rows to Treeview
for index, row in excel_file.iterrows():
treeview.insert("", "end", values=list(row))
</code></pre>
<p>This is before uploading excel file</p>
<p><a href="https://i.sstatic.net/9njqLOIK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9njqLOIK.jpg" alt="enter image description here" /></a></p>
<p>And this is after file is uploaded</p>
<p><a href="https://i.sstatic.net/WiFYmUUw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiFYmUUw.jpg" alt="enter image description here" /></a></p>
<p>This is the hierarchy for the frames (in case it is important); the Treeview is under Excel_Frame:</p>
<pre><code>root
Main_Frame
Load_Frame
Info_Frame
Body_Frame
Entry_Frame
Button_Frame
Excel_Frame
</code></pre>
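<p>For reference, here is a minimal, self-contained sketch (not the exact frame hierarchy above) where the horizontal scrollbar works. The key idea is that the containing frame is given an explicit size and <code>pack_propagate(False)</code>, so the Treeview cannot grow the frame (and the window) to fit all 15 columns, which is what makes <code>xview</code> scrolling meaningful. The equivalent in the layout above would be to stop Excel_Frame from sizing itself to the Treeview's requested width:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter import ttk

root = tk.Tk()

# Fixed-size container: children may no longer dictate the frame's size.
excel_frame = ttk.Frame(root, width=600, height=400)
excel_frame.pack_propagate(False)
excel_frame.pack(fill="both", expand=True)

scroll_x = ttk.Scrollbar(excel_frame, orient="horizontal")
scroll_y = ttk.Scrollbar(excel_frame, orient="vertical")
scroll_x.pack(side="bottom", fill="x")
scroll_y.pack(side="right", fill="y")

tree = ttk.Treeview(excel_frame, show="headings",
                    xscrollcommand=scroll_x.set, yscrollcommand=scroll_y.set)
tree.pack(side="left", fill="both", expand=True)
scroll_x.config(command=tree.xview)
scroll_y.config(command=tree.yview)

# 15 wide columns, as in the question: total content width exceeds the frame.
cols = [f"col{i}" for i in range(15)]
tree["columns"] = cols
for c in cols:
    tree.heading(c, text=c)
    tree.column(c, width=120, stretch=False)

for r in range(50):
    tree.insert("", "end", values=[f"r{r}c{i}" for i in range(15)])

root.mainloop()
</code></pre>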
|
<python><tkinter><treeview>
|
2025-10-25 16:31:44
| 1
| 367
|
user2741620
|
79,799,589
| 4,214,922
|
How do I exchange a service account for a tenant encoded token
|
<p>Using Identity Platform, I'd like to exchange a service account for a tenant-encoded token. I need to do this because our GitHub Action needs to use our CLI tool, which requires sign-in by a tenant user. The GitHub Action uses a service account, set as a secret, which has the role 'Service Account Token Creator'. I wrote the following code snippet to test this:</p>
<pre><code>import json
import requests
import firebase_admin
from firebase_admin import credentials, auth
# Load service account
cred = credentials.Certificate('service_account.json')
# Initialize Firebase Admin SDK
firebase_admin.initialize_app(cred)
# Load service account to get the UID
with open('service_account.json', 'r') as f:
service_account = json.load(f)
# Tenant ID for developers
tenant_id = 'DEVELOPERS_TENANT_ID'
# Use a developer user UID
uid = 'SOME_RANDOM_USER_ID'
# Your Firebase Web API Key (replace with your actual API key)
api_key = 'API_KEY'
# Create custom token using Firebase Admin SDK with tenant ID
custom_token = auth.create_custom_token(uid)
# Decode bytes to string if needed
if isinstance(custom_token, bytes):
custom_token = custom_token.decode('utf-8')
print(f"Custom token:")
print(custom_token)
# Make API call to sign in with custom token
url = f"https://identitytoolkit.googleapis.com/v1/accounts:signInWithCustomToken?key={api_key}"
payload = {
"token": custom_token,
"returnSecureToken": True,
"tenantId": tenant_id
}
headers = {
"Content-Type": "application/json"
}
print("\nMaking API call to sign in...")
response = requests.post(url, json=payload, headers=headers)
if response.status_code == 200:
result = response.json()
print("Sign-in successful!")
print(f"ID Token: {result.get('idToken')}")
print(f"Refresh Token: {result.get('refreshToken')}")
print(f"Local ID: {result.get('localId')}")
print(f"Expires In: {result.get('expiresIn')} seconds")
else:
print(f"Sign-in failed with status code: {response.status_code}")
print(f"Error: {response.text}")
</code></pre>
<p>When I run the code, this error is thrown</p>
<pre><code>Sign-in failed with status code: 500
Error: {
"error": {
"code": 500,
"message": "Internal error encountered.",
"errors": [
{
"message": "Internal error encountered.",
"domain": "global",
"reason": "backendError"
}
],
"status": "INTERNAL"
}
}
</code></pre>
<p>Note: When I remove the <code>tenant_id</code> variable and <code>tenantId</code> from the <code>payload</code>, the sign-in succeeds, which leads me to believe this is a tenant-related issue.</p>
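<p>One thing that looks worth testing (offered as a sketch, not a confirmed fix) is minting the custom token through a tenant-aware client, so the token itself carries the tenant claim rather than only passing <code>tenantId</code> at sign-in. The Admin SDK exposes this through <code>firebase_admin.tenant_mgt</code>; the IDs below are the same placeholders as above:</p>
<pre class="lang-py prettyprint-override"><code>import firebase_admin
from firebase_admin import credentials, tenant_mgt

cred = credentials.Certificate('service_account.json')
firebase_admin.initialize_app(cred)

# Tenant-scoped auth client: custom tokens it mints include the tenant claim.
tenant_client = tenant_mgt.auth_for_tenant('DEVELOPERS_TENANT_ID')
custom_token = tenant_client.create_custom_token('SOME_RANDOM_USER_ID')
if isinstance(custom_token, bytes):
    custom_token = custom_token.decode('utf-8')
print(custom_token)
</code></pre>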
|
<python><google-cloud-platform><firebase-authentication><google-identity-toolkit>
|
2025-10-25 15:39:00
| 1
| 3,591
|
anonymous-dev
|
79,799,564
| 6,066,645
|
Why does my PySide6 DBUS ScreenSaver Signal-Listener not work?
|
<p>I am currently trying to connect to the Screensaver Signal (<code>org.freedesktop.ScreenSaver</code> - <code>/ScreenSaver</code> - <code>ActiveChanged(bool)</code>) that is emitted when the Screen-Saver is closed:</p>
<pre><code>$ dbus-monitor "interface='org.freedesktop.ScreenSaver'"
...
signal time=1761396762.341364 sender=:1.21 -> destination=(null destination) serial=94408 path=/ScreenSaver; interface=org.freedesktop.ScreenSaver; member=ActiveChanged
boolean true
...
signal time=1761396765.026613 sender=:1.21 -> destination=(null destination) serial=94464 path=/ScreenSaver; interface=org.freedesktop.ScreenSaver; member=ActiveChanged
boolean false
</code></pre>
<p>For the past few days, I have been banging my head against the wall trying to implement a DBUS Signal Listener for this signal using PySide6/Qt. This is my current attempt (combining multiple different attempts):</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from PySide6 import QtDBus, QtCore
from PySide6.QtCore import Slot, QObject
from typing import Annotated, get_type_hints
class DBusEventDispatcher(QtCore.QObject):
def __init__(self):
super().__init__()
sessionBus = QtDBus.QDBusConnection.sessionBus()
if not sessionBus.isConnected():
errorMessage = sessionBus.lastError().message()
raise Exception("Cannot connect to DBUS: " + errorMessage)
success = sessionBus.registerService("my_app")
print(success)
success = sessionBus.registerObject(
'/',
'org.freedesktop.ScreenSaver',
self,
QtDBus.QDBusConnection.ExportAllSlots
)
print(success)
#for path in ['org.freedesktop.ScreenSaver', 'org.gnome.ScreenSaver']:
# interface = QtDBus.QDBusInterface(path, '/ScreenSaver', '', sessionBus)
# if interface != None:
# interface.ActiveChanged.connect(self._onActiveChanged)
mo = self.metaObject()
for m in range(mo.methodOffset(), mo.methodCount()):
print(mo.method(m).methodSignature())
self.iface = QtDBus.QDBusInterface(
"org.freedesktop.ScreenSaver", # Service
"/ScreenSaver", # Path
"org.freedesktop.ScreenSaver", # Interface
sessionBus
)
print(self.iface.isValid())
self.iface.connect(
QtCore.SIGNAL("ActiveChanged(bool)"),
self.ActiveChanged
)
QtCore.QObject.connect(
self.iface,
QtCore.SIGNAL("ActiveChanged(bool)"),
self.ActiveChanged
)
# for service in ['org.freedesktop.ScreenSaver']: # 'org.gnome.ScreenSaver'
success = sessionBus.connect(
'org.freedesktop.ScreenSaver', # service,
'/ScreenSaver',
'org.freedesktop.ScreenSaver',
'ActiveChanged',
self,
QtCore.SLOT('ActiveChanged(bool)')
)
print(success)
@QtCore.Slot(bool)
def ActiveChanged(self, active: bool):
print("ActiveChanged")
print(active)
</code></pre>
<p>And I get positive return values for the different calls:</p>
<pre><code>$ python3 my_app.py
True
True
b'ActiveChanged(bool)'
True
True
</code></pre>
<p>But after that, nothing...
When I lock the screen, I see the message from <code>dbus-monitor</code> from above, but nothing from the python script.</p>
<p>Does anybody know what may be wrong with the script?</p>
<p>Could this be a system-issue, like some systemd-rule blocking the access?</p>
<p>Is there maybe some debug-flag that I could set to see what is happening inside the python / DBUS library?</p>
<p>Any other ideas?</p>
<p>Thanks for any help.</p>
<hr />
<pre><code>$ python3 --version
Python 3.12.3
$ pip3 list
Package Version
------------------ -------
cysystemd 2.0.1
pip 24.0
PySide6_Essentials 6.10.0
shiboken6 6.10.0
systemd 0.17.1
systemd-python 235
$ uname -a
Linux gerrit-framework 6.14.0-112033-tuxedo #33~24.04.1tux1 SMP PREEMPT_DYNAMIC Tue Sep 30 19:33:36 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/issue
TUXEDO OS 24.04.3 LTS \l
</code></pre>
<ul>
<li>KDE-Plasma-Version: 6.4.5</li>
<li>KDE-Frameworks-Version: 6.17.0</li>
<li>Qt-Version: 6.8.2</li>
<li>Graphics-Platform: Wayland</li>
</ul>
|
<python><qt><dbus><pyside6><freedesktop.org>
|
2025-10-25 14:48:41
| 0
| 440
|
Gerrit Addiks
|
79,799,443
| 428,542
|
Type-hinting a combined Mix-in class and subclass leads to TypeErrors
|
<p>The following code-snippet bridges some dataclasses and GUI-classes using PySide6 (the Qt library).</p>
<p>The <code>HasDataobject</code> class is key here. It defines a mix-in for subclasses of <code>QGraphicsItem</code>. It adds an attribute (<code>dataobject</code>) and method (<code>update_visibility()</code>) to those classes.</p>
<pre><code>from typing import TypeVar, Generic, Protocol
from PySide6.QtWidgets import QGraphicsItem, QGraphicsEllipseItem
# QGraphicsEllipseItem is a subclass of QGraphicsItem
### Data objects ###
class VisibleData:
def is_visible(self) -> bool:
return True
class MyNode(VisibleData):
pass
VisibleDataType = TypeVar('VisibleDataType', bound=VisibleData)
### Visual objects (using PySide6) ###
class QGraphicsItemProtocol(Protocol):
"""Define the methods of QGraphicsItem that HasDataobject uses."""
def setVisible(self, visible: bool, /):
...
class HasDataobject(Generic[VisibleDataType]):
"""Mix-in class. Adds an update_visibility() method, and
a dataobject attribute. The dataobject must have a
is_visible() method, as defined in VisibleData.
Any subclass of HasDataobject must also be a subclass of
QGraphicsItem, which defines setVisible()."""
dataobject: VisibleDataType
def update_visibility(self):
self.setVisible(self.dataobject.is_visible())
class Circle(QGraphicsEllipseItem, HasDataobject[MyNode]):
def __init__(self):
super().__init__()
pass
</code></pre>
<p>The above code works without issues. However, Pyright or other code validators will complain that <code>setVisible</code> is not a known method (reportAttributeAccessIssue):</p>
<pre><code>self.setVisible(self.dataobject.is_visible())
</code></pre>
<p>The easiest way to suppress this is to add <code># type: ignore</code>, but I prefer to make it explicit what is described in the docstring: <em>Any subclass of <code>HasDataobject</code> must also be a subclass of <code>QGraphicsItem</code></em>.</p>
<p>My initial thought was to make <code>HasDataobject</code> a subclass of <code>QGraphicsItem</code>:</p>
<pre><code>class HasDataobject(QGraphicsItem, Generic[VisibleDataType])
</code></pre>
<p>However, that led to the following RuntimeError in the <code>Circle.__init__</code> method:</p>
<blockquote>
<p>RuntimeError: You can't initialize an PySide6.QtWidgets.QGraphicsEllipseItem object in class Circle twice!</p>
</blockquote>
<p>So my second attempt is to use the <code>QGraphicsItemProtocol</code> defined above:</p>
<pre><code>class HasDataobject(Generic[VisibleDataType], QGraphicsItemProtocol)
</code></pre>
<p>However, that gives a</p>
<blockquote>
<p>TypeError: Cannot create a consistent method resolution order (MRO) for bases Generic, QGraphicsItemProtocol</p>
</blockquote>
<p>So next I tried to reverse the two base classes:</p>
<pre><code>class HasDataobject(QGraphicsItemProtocol, Generic[VisibleDataType])
</code></pre>
<p>However, that again leads to a problem when defining the Circle class:</p>
<blockquote>
<p>TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases</p>
</blockquote>
<p>I'm a bit stuck now. How can I use type hints in this code and make Pyright (and myself) happy?</p>
<p>PS: Any suggestions and best practices are appreciated. I even toyed with the idea to make <code>HasDataobject</code> a genuine class (<em>has-a</em> <code>QGraphicsItem</code> instead of <em>is-a</em> <code>QGraphicsItem</code>) instead of a mix-in, but subclassing is really beneficial since that enables the power of Qt with things like <code>scene.addItem(a_circle)</code>.</p>
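<p>One pattern worth considering (a sketch, and only one of several ways to satisfy a type checker) is to keep <code>HasDataobject</code> a plain mix-in and instead annotate <code>self</code> in the method with a protocol that describes everything the eventual host class must provide, including <code>dataobject</code>. The mix-in then inherits from neither <code>QGraphicsItem</code> nor the protocol, so the MRO and metaclass conflicts never arise. Pyright generally supports protocol-typed <code>self</code> for mix-ins; mypy may be stricter about it:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Generic, Protocol, TypeVar

class VisibleData:
    def is_visible(self) -> bool:
        return True

VisibleDataType = TypeVar('VisibleDataType', bound=VisibleData)

class _VisibleGraphicsItem(Protocol[VisibleDataType]):
    """Everything the mix-in needs from its eventual host class."""
    dataobject: VisibleDataType
    def setVisible(self, visible: bool, /) -> None: ...

class HasDataobject(Generic[VisibleDataType]):
    dataobject: VisibleDataType

    # Annotating `self` with the protocol tells the checker what the host
    # class must supply, without inheriting from QGraphicsItem or the Protocol.
    def update_visibility(self: "_VisibleGraphicsItem[VisibleDataType]") -> None:
        self.setVisible(self.dataobject.is_visible())
</code></pre>
<p>Call sites such as <code>circle.update_visibility()</code> are then checked structurally: the concrete class must supply both <code>setVisible()</code> (from QGraphicsItem) and <code>dataobject</code> (from the mix-in).</p>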
|
<python><python-typing><mixins><pyside6>
|
2025-10-25 10:27:27
| 1
| 3,568
|
MacFreek
|
79,799,441
| 16,220,410
|
no visual active indication of python environment in linux mint terminal
|
<p>I can't figure out why there is no visual indication that the Python environment is active. I created the venv with uv, installed the Flet framework inside it, and I can run the Python file without issue, but after activation there is no indication in the terminal that the environment is active. For context, I am using an Oh My Posh customization for my terminal.</p>
<p><a href="https://i.sstatic.net/geH5xFIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/geH5xFIz.png" alt="environment no visual active indicator" /></a></p>
<p>bashrc config</p>
<pre><code># ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
#echo "Loading .bashrc"
POSH_THEME="sim-web"
eval "$(oh-my-posh init bash --config /home/kidlinux/.cache/oh-my-posh/themes/$POSH_THEME.omp.json)"
. "$HOME/.cargo/env"
# if [ -z "$TMUX" ]; then
# tmux attach-session -t main || tmux new-session -s main
# fi
# alias tn='tmux new-session -s'
# alias tl='tmux list-sessions'
# alias ta='tmux attach-session'
alias y='yazi'
# NAVIGATE WITH YAZI SELECT FOLDER AUTO OPEN NVIM AFTER QUIT
nvim-yazi() {
local chosen=$(yazi --cwd-file=/tmp/yazi-cwd)
if [ -f /tmp/yazi-cwd ]; then
cd "$(cat /tmp/yazi-cwd)"
nvim .
rm /tmp/yazi-cwd
fi
}
alias nv='nvim-yazi'
</code></pre>
<p>1_shell.omp.json</p>
<pre><code>{
"$schema": "https://raw.githubusercontent.com/JanDeDobbeleer/oh-my-posh/main/themes/schema.json",
"blocks": [
{
"alignment": "left",
"newline": true,
"segments": [
{
"foreground": "#ffbebc",
"leading_diamond": "<#ff70a6> \ue200 </>",
"properties": {
"display_host": true
},
"style": "diamond",
"template": "{{ .UserName }} <#ffffff>on</>",
"type": "session"
},
{
"foreground": "#bc93ff",
"properties": {
"time_format": "Monday <#ffffff>at</> 3:04 PM"
},
"style": "diamond",
"template": " {{ .CurrentDate | date .Format }} ",
"type": "time"
},
{
"foreground": "#ee79d1",
"properties": {
"branch_icon": "\ue725 ",
"fetch_status": true,
"fetch_upstream_icon": true,
"fetch_worktree_count": true
},
"style": "diamond",
"template": " {{ .UpstreamIcon }}{{ .HEAD }}{{if .BranchStatus }} {{ .BranchStatus }}{{ end }}{{ if .Working.Changed }} \uf044 {{ .Working.String }}{{ end }}{{ if and (.Working.Changed) (.Staging.Changed) }} |{{ end }}{{ if .Staging.Changed }} \uf046 {{ .Staging.String }}{{ end }}{{ if gt .StashCount 0 }} \ueb4b {{ .StashCount }}{{ end }} ",
"type": "git"
}
],
"type": "prompt"
},
{
"alignment": "right",
"segments": [
{
"foreground": "#a9ffb4",
"style": "plain",
"type": "text"
},
{
"foreground": "#a9ffb4",
"properties": {
"style": "dallas",
"threshold": 0
},
"style": "diamond",
"template": " {{ .FormattedMs }}s <#ffffff>\ue601</>",
"type": "executiontime"
},
{
"properties": {
"root_icon": "\uf292 "
},
"style": "diamond",
"template": " \uf0e7 ",
"type": "root"
},
{
"foreground": "#94ffa2",
"style": "diamond",
"template": " <#ffffff>MEM:</> {{ round .PhysicalPercentUsed .Precision }}% ({{ (div ((sub .PhysicalTotalMemory .PhysicalAvailableMemory)|float64) 1073741824.0) }}/{{ (div .PhysicalTotalMemory 1073741824.0) }}GB)",
"type": "sysinfo"
}
],
"type": "prompt"
},
{
"alignment": "left",
"newline": true,
"segments": [
{
"foreground": "#ffafd2",
"leading_diamond": "<#00c7fc> \ue285 </><#ffafd2>{</>",
"properties": {
"folder_icon": "\uf07b",
"folder_separator_icon": " \uebcb ",
"home_icon": "home",
"style": "agnoster_full"
},
"style": "diamond",
"template": " \ue5ff {{ .Path }} ",
"trailing_diamond": "<#ffafd2>}</>",
"type": "path"
},
{
"foreground": "#A9FFB4",
"foreground_templates": ["{{ if gt .Code 0 }}#ef5350{{ end }}"],
"properties": {
"always_enabled": true
},
"style": "plain",
"template": " \ue286 ",
"type": "status"
}
],
"type": "prompt"
}
],
"console_title_template": "{{ .Folder }}",
"transient_prompt": {
"background": "transparent",
"foreground": "#FEF5ED",
"template": "\ue285 "
},
"version": 3
}
</code></pre>
|
<python><bash><oh-my-posh>
|
2025-10-25 10:26:10
| 1
| 1,277
|
k1dr0ck
|
79,799,270
| 1,796,123
|
How to reuse a Django model for multiple relationships
|
<p>I want to make a task model and a user model, and I want each task to be related to three users: a creator user, an assignee user, and a verifier user. I also want to have only one user table. My inclination is to have three foreign keys on the task table: creator_id, assignee_id, and verifier_id. Is this the correct way to do it? How do I model that in Django?</p>
<p>UPDATE</p>
<p>Here's my simplified models</p>
<pre><code>class User(models.Model):
id = models.CharField(primary_key=True, max_length=100)
name = models.CharField(max_length=100)
class Task(models.Model):
id = models.CharField(primary_key=True, max_length=100)
title = models.CharField(max_length=100)
creator = models.ForeignKey('User', on_delete=models.DO_NOTHING, related_name='creator_tasks')
assignee = models.ForeignKey('User', on_delete=models.DO_NOTHING, related_name='user_tasks')
verifier = models.ForeignKey('User', on_delete=models.DO_NOTHING, related_name='verifier_tasks')
</code></pre>
<p>And here's the error I get when I relate a user to a task as a creator and try to get the task's creator:</p>
<pre><code>$ python manage.py shell
>>> from todoapp.models import User
>>> from todoapp.models import Task
>>> user = User.objects.get(id='1')
>>> task = Task.objects.get(id='1')
>>> task.creator_id = user.id
>>> task.save()
>>> task.creator
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/josh/.venv/myenv/lib/python3.9/site-packages/django/db/models/fields/related_descriptors.py", line 188, in __get__
raise self.RelatedObjectDoesNotExist(
todoapp.models.Task.creator.RelatedObjectDoesNotExist: Task has no creator.
</code></pre>
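<p>As for the modeling itself, three <code>ForeignKey</code> fields with distinct <code>related_name</code> values (as in the models above) is the usual way to do this. A small usage sketch, assuming the models above and placeholder data, showing both the forward and the reverse access:</p>
<pre class="lang-py prettyprint-override"><code># Forward assignment with model instances (rather than raw *_id values).
creator = User.objects.create(id='u1', name='Alice')
assignee = User.objects.create(id='u2', name='Bob')
verifier = User.objects.create(id='u3', name='Carol')

task = Task.objects.create(
    id='t1', title='Write docs',
    creator=creator, assignee=assignee, verifier=verifier,
)

print(task.creator.name)              # forward: task -> user
print(creator.creator_tasks.all())    # reverse: tasks this user created
print(assignee.user_tasks.all())      # reverse name comes from related_name
</code></pre>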
|
<python><django><django-models>
|
2025-10-25 01:23:17
| 2
| 6,410
|
wetjosh
|
79,799,210
| 1,727,657
|
Confused about variable scope - why is x local in this function and why isn't it associated with a variable?
|
<p>When I run the Python script below, I get the following error from IPython (v3.14):</p>
<pre><code>UnboundLocalError: cannot access local variable 'x' where it is not associated with a value
</code></pre>
<p>and it highlights the 2nd <code>x</code> in <code>print('x = ',x)</code>. But if I comment out the next line <code>x = x + 1</code> the error disappears. I'm confused by this behavior because I declare <code>x</code> outside the function so it should be global, right? And why does the line <code>x = x + 1</code> have anything to do with this?</p>
<pre><code>x = 0
def testfunction():
print('x = ',x)
x = x + 1
testfunction()
print ('x = ',x)
</code></pre>
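<p>For reference, this happens because the assignment <code>x = x + 1</code> makes the compiler treat <code>x</code> as local throughout the whole function body, so the earlier <code>print</code> reads a local name that has no value yet. A minimal variant that keeps the assignment but restores the expected behaviour declares the name <code>global</code>:</p>
<pre class="lang-py prettyprint-override"><code>x = 0

def testfunction():
    global x          # x now refers to the module-level name everywhere in this function
    print('x = ', x)  # prints 0
    x = x + 1

testfunction()
print('x = ', x)      # prints 1
</code></pre>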
|
<python>
|
2025-10-24 22:24:11
| 0
| 477
|
OutThere
|
79,799,126
| 784,318
|
Line unnecessarily expanded
|
<p>In my Python code I have a line that looks like so:</p>
<pre><code>tm = tm[0:3] + [0,] + tm[3:6] + [0,]
</code></pre>
<p>Ruff attempts to fix this with the following 10 lines, which are arguably less legible:</p>
<pre><code>tm = (
tm[0:3]
+ [
0,
]
+ tm[3:6]
+ [
0,
]
)
</code></pre>
<p>Why is Ruff expanding the statement? Is there a way to disable this behaviour?</p>
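<p>If this behaves like Black's formatting rules (which Ruff's formatter follows closely), the trigger is the trailing comma inside <code>[0,]</code>: a so-called magic trailing comma forces the formatter to explode that collection, and the multi-line operands then drag the whole expression into the parenthesised form. Two things worth trying are dropping the trailing commas, as sketched below, or setting <code>skip-magic-trailing-comma = true</code> in Ruff's formatter options (<code>[tool.ruff.format]</code> in pyproject.toml).</p>
<pre class="lang-py prettyprint-override"><code>tm = [1, 2, 3, 4, 5, 6]

# Without the trailing commas there is no magic trailing comma to honour,
# so the formatter keeps the expression on a single line (assuming it fits).
tm = tm[0:3] + [0] + tm[3:6] + [0]
print(tm)  # [1, 2, 3, 0, 4, 5, 6, 0]
</code></pre>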
|
<python><ruff>
|
2025-10-24 20:08:03
| 1
| 23,067
|
Besi
|
79,799,057
| 11,581,214
|
Applying JavaScript formatting to IronPDF form using PyMuPDF
|
<p>I am using IronPdf (2025.6.1.5) to create fillable PDF forms. This involves creating HTML forms and converting them to PDFs with IronPDF. I am post-processing the PDFs with PyMuPDF (1.26.4) to apply JavaScript controls to fields to specify currency, percentage, and number formats. I am using the Adobe Acrobat <code>AFNumber_Format()</code> and <code>AFNumber_Keystroke()</code> functions. Note: Using IronPDF greatly simplifies conversion of HTML forms to PDF and addresses cosmetic content and formatting. The issue is with the fine control of field formats.</p>
<p>For comparison and troubleshooting, I have created a (mostly) equivalent PDF form directly (from scratch) with PyMuPDF. The PyMuPDF form provides form field (widget) behavior as expected for keystroke and number format in Chrome and Acrobat. The PyMuPDF form shows field properties in Acrobat Pro as one would expect. Attempts to type invalid characters are blocked in Acrobat. An alert dialog is displayed in Chrome. Formatting (including currency and percent symbols and decimal places) is applied on loss of focus.</p>
<p><a href="https://i.sstatic.net/Cb2EaLJr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Cb2EaLJr.png" alt="Alert dialog for invalid input" /></a></p>
<p>The IronPDF form provides partial JavaScript functionality in Chrome. Attempting to type invalid (non-number) characters into form fields produces the alert dialog (above) to identify invalid input. The specified number formatting is not applied, however.</p>
<p>A comparison of the post-processed IronPDF form and the PyMuPDF form reveals the following:</p>
<p><strong>PyMuPDF:</strong></p>
<pre><code>pdf __dict__: {'is_closed': False, 'is_encrypted': False, 'metadata': {'format': 'PDF 1.7', 'title': '', 'author': '', 'subject': '', 'keywords': '', 'creator': '', 'producer': '', 'creationDate': '', 'modDate': '', 'trapped': '', 'encryption': None}, 'FontInfos': [], 'Graftmaps': {}, 'ShownPages': {}, 'InsertedImages': {}, '_page_refs': <WeakValueDictionary at 0x21e6ca0dd60>, '_name': '../ACL_Forms/FormattedPDF/fillable_form2.pdf', 'stream': None, 'thisown': True, '_graft_id': 4, '_outline': <pymupdf.Outline object at 0x0000021E6D8ECDF0>}
</code></pre>
<p><strong>IronPDF + PyMuPDF:</strong></p>
<pre><code>pdf __dict__: {'is_closed': False, 'is_encrypted': False, 'metadata': {'format': 'PDF 1.4', 'title': 'This is a Test', 'author': '', 'subject': '', 'keywords': '', 'creator': 'IronPdf', 'producer': 'IronPdf v2025.6.16', 'creationDate': "D:20251024154451+00'00'", 'modDate': "D:20251024154451+00'00'", 'trapped': '', 'encryption': None}, 'FontInfos': [], 'Graftmaps': {}, 'ShownPages': {}, 'InsertedImages': {}, '_page_refs': <WeakValueDictionary at 0x21e6ca876f0>, '_name': '../ACL_Forms/FormattedPDF/fefillable_form2.pdf', 'stream': None, 'thisown': True, '_graft_id': 2, '_outline': <pymupdf.Outline object at 0x0000021E6E868AC0>}
</code></pre>
<p>Widget script and text parameters are equivalent. (This is the currency field.)</p>
<pre><code>widget script: None
widget script_blur: None
widget script_calc: None
widget script_change: None
widget script_focus: None
widget script_format: AFNumber_Format(2, 0, 0, 0, "$ ", true);
widget script_stroke: AFNumber_Keystroke(2, 0, 0, 0, "$ ", true);
widget text_color: [0.0, 0.0, 0.0]
widget text_font: Helv
widget text_fontsize: 12.0
widget text_format: 1
widget text_maxlen: 0
</code></pre>
<p>There is a difference in the widget annotation info, however:</p>
<p><strong>PyMuPDF:</strong></p>
<pre><code>widget._annot info: {'content': '', 'name': '', 'title': 'FieldName_Currency', 'creationDate': '', 'modDate': '', 'subject': '', 'id': 'fitz-W0'}
</code></pre>
<p><strong>IronPDF + PyMuPDF:</strong></p>
<pre><code>widget._annot info: {'content': '', 'name': '', 'title': '', 'creationDate': '', 'modDate': '', 'subject': '', 'id': ''}
</code></pre>
<p>Attempts to <code>set_info()</code> for the widget annotation leave the <code>widget._annot.info</code> unchanged.</p>
<p>PDFs and python code have been uploaded to <a href="https://github.com/BalooRM/PDFFormat/tree/main/StackOverflow" rel="nofollow noreferrer">Github</a>.</p>
<p>Any assistance in identifying the missing property is appreciated. Thank you.</p>
|
<javascript><python><pdf><pymupdf><ironpdf>
|
2025-10-24 18:05:14
| 0
| 524
|
BalooRM
|
79,798,997
| 1,194,864
|
Compute instance on demand on google cloud for a websocket application
|
<p>I have created a WebSocket application, and I host my server on a VM running on Google Cloud Platform as a Compute Engine instance (I am not really accustomed to the correct terminology). This VM is constantly running and listening for the client. To run the server, I connect over SSH and then run my Python script for the WebSocket. However, that means the server keeps running constantly.</p>
<p>What I want is to update the logic of the application and run the server on demand. So instead of allocating the resources constantly, I would like to create a compute instance on Google Cloud once the client app starts, and run the server Python file to perform the things I want (run my deep learning model from there). Is there a pipeline with which I can do this?</p>
<p>How can I install Python and my dependencies in this case?</p>
<p>Edit: What I have built so far is a local client that is an HTML/JavaScript website that creates a page that accesses the camera and sends images to the server, which I want to store in the cloud. So far, I am doing it using a compute instance VM that is constantly running. There, I am constantly running a Python script that receives the video stream, processes it in real-time (by applying a Deep learning model), and sends a reply to the client. I want this server script to run on demand and not constantly.</p>
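<p>One possible shape for this (a sketch under assumptions: placeholder project/zone/instance names, Application Default Credentials, and the <code>google-cloud-compute</code> client library) is to keep a stopped VM with the dependencies already installed, and have a thin control endpoint start and stop it when a client session begins and ends. Installing Python and the dependencies then happens once, either by hand, via a startup script, or by baking a custom machine image; alternatives such as Cloud Run are also worth evaluating, although a long-lived WebSocket plus GPU workload constrains the options.</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import compute_v1

PROJECT = "my-project"        # placeholder
ZONE = "europe-west1-b"       # placeholder
INSTANCE = "ws-inference-vm"  # placeholder

def start_server() -> None:
    """Boot the pre-provisioned VM when a client session starts."""
    client = compute_v1.InstancesClient()
    client.start(project=PROJECT, zone=ZONE, instance=INSTANCE).result()

def stop_server() -> None:
    """Shut the VM down again once the session ends, so compute is no longer billed."""
    client = compute_v1.InstancesClient()
    client.stop(project=PROJECT, zone=ZONE, instance=INSTANCE).result()
</code></pre>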
|
<python><google-cloud-platform><websocket><google-compute-engine>
|
2025-10-24 16:52:03
| 2
| 5,452
|
Jose Ramon
|
79,798,940
| 1,818,713
|
python typing distinctions between inline created parameters and variables
|
<h4>Preamble</h4>
<p>I'm using polars's <code>write_excel</code> method, which has a <code>column_formats</code> parameter that expects a <code>ColumnFormatDict</code>, <a href="https://github.com/pola-rs/polars/blob/main/py-polars/src/polars/_typing.py" rel="nofollow noreferrer">defined here and reproduced below</a>:</p>
<pre class="lang-py prettyprint-override"><code>ColumnFormatDict: TypeAlias = Mapping[
# dict of colname(s) or selector(s) to format string or dict
Union[ColumnNameOrSelector, tuple[ColumnNameOrSelector, ...]],
Union[str, Mapping[str, str]],
]
ColumnNameOrSelector: TypeAlias = Union[str, SelectorType]
SelectorType: TypeAlias = "Selector"
</code></pre>
<p>If I use it like this:</p>
<pre class="lang-py prettyprint-override"><code>df.write_excel(..., column_formats={"a":"#,##0;[Red]-#,##0"})
</code></pre>
<p>then there is no complaint from the type checker.</p>
<p>If I do:</p>
<pre class="lang-py prettyprint-override"><code>column_formats = {"a":"#,##0;[Red]-#,##0"}
df.write_excel(..., column_formats=column_formats)
</code></pre>
<p>then it complains:</p>
<pre><code>Argument of type "dict[str, str]" cannot be assigned to parameter "column_formats" of type "ColumnFormatDict | None" in function "write_excel"
Type "dict[str, str]" is not assignable to type "ColumnFormatDict | None"
"dict[str, str]" is not assignable to "Mapping[ColumnNameOrSelector | tuple[ColumnNameOrSelector, ...], str | Mapping[str, str]]"
Type parameter "_KT@Mapping" is invariant, but "str" is not the same as "ColumnNameOrSelector | tuple[ColumnNameOrSelector, ...]"
"dict[str, str]" is not assignable to "None"
</code></pre>
<p>where the relevant line is: <code>Type parameter "_KT@Mapping" is invariant, but "str" is not the same as "ColumnNameOrSelector | tuple[ColumnNameOrSelector, ...]"</code></p>
<p>I discovered that if I annotate my variable as</p>
<pre class="lang-py prettyprint-override"><code>column_formats:dict[str|cs.Selector|tuple[str|cs.Selector], str]={"a":"abc"}
</code></pre>
<p>then it won't complain but the parameter is meant to be flexible and its intended flexibility is seemingly making it less flexible.</p>
<h4>Questions:</h4>
<ol>
<li>Why, in the first case, is a <code>str</code> a valid <code>ColumnNameOrSelector | tuple[ColumnNameOrSelector, ...]</code> but in the second case it isn't?</li>
</ol>
<p>It's interesting that it is only complaining about the key and not the value portion of the <code>Mapping</code>, in other words I didn't have to annotate it as <code>dict[str|cs.Selector|tuple[str|cs.Selector], str|dict[str,str]]</code></p>
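<p>Regarding question 1, the same behaviour can be reproduced without polars at all; it comes from bidirectional inference on literals versus invariance of <code>Mapping</code>'s key parameter. A stripped-down illustration (checker behaviour as I understand it, using Pyright-style inference):</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Mapping

def takes(m: Mapping[str | int, str]) -> None:
    ...

# The literal is inferred *against* the expected parameter type, so its key
# type is widened to `str | int` and the call checks out.
takes({"a": "x"})

# Here the literal is inferred in isolation as dict[str, str]; because
# Mapping's key parameter (_KT) is invariant, dict[str, str] is not
# assignable to Mapping[str | int, str] and the call is rejected.
d = {"a": "x"}
takes(d)
</code></pre>
<p>That is also why an explicit annotation on the variable silences the complaint: it changes what the literal is inferred as, not what the function accepts.</p>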
<ol start="2">
<li>Additionally, suppose I want to put in a PR to amend <code>ColumnNameOrSelector</code> is there a way to define it such that my second usage doesn't generate a type warning without an explicit annotation?</li>
</ol>
<p>Assuming it can be done, I'm guessing there needs to be a Mapping for every key possibility, so maybe something like this (or maybe that last tuple possibility even needs to be split up into a fourth Mapping case):</p>
<pre><code>ColumnFormatDict: TypeAlias = Union[
Mapping[str, Union[str, Mapping[str, str]]],
Mapping[SelectorType, Union[str, Mapping[str, str]]],
Mapping[tuple[str|SelectorType, ...], Union[str, Mapping[str, str]]],
]
</code></pre>
<ol start="3">
<li>Is that guess on the right track or way off?</li>
</ol>
|
<python><python-typing><python-polars>
|
2025-10-24 15:52:54
| 2
| 19,938
|
Dean MacGregor
|
79,798,928
| 10,034,073
|
Access expected type in Pydantic within a field wrap validator using the annotated pattern
|
<p>I'm using a <a href="https://docs.pydantic.dev/latest/concepts/validators/#field-wrap-validator" rel="nofollow noreferrer">field wrap validator</a> in Pydantic with the Annotated pattern, and I want to access the expected/annotated type of a field from within the validator function. Here's an example of what I want:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, Any
from pydantic import (
BaseModel, ValidationError, ValidationInfo,
ValidatorFunctionWrapHandler, WrapValidator
)
def wrapper(value: Any,
handler: ValidatorFunctionWrapHandler,
info: ValidationInfo) -> Any:
try:
return handler(value)
except ValidationError as e:
# Custom error handling where I want to know the expected type.
# I'm looking for something like this:
if info.annotation == str:
# Do something
elif info.annotation == int | bool:
# Do something else
else:
raise
class MyModel(BaseModel):
foo: Annotated[str, WrapValidator(wrapper)]
class AnotherModel(BaseModel):
bar: Annotated[int | bool, WrapValidator(wrapper)]
</code></pre>
<p>I would have expected <a href="https://docs.pydantic.dev/latest/api/pydantic_core_schema/#pydantic_core.core_schema.ValidationInfo.context" rel="nofollow noreferrer"><code>ValidationInfo</code></a> to include the expected data type of <code>value</code>, but it doesn't seem to.</p>
<p>Another option is to go through the model fields. Something like this:
<code>MyModel.model_fields[info.field_name].annotation</code>.</p>
<p>However, this wrapper is applied to multiple models, so I can't just reference <code>MyModel</code> explicitly like that. And again <code>ValidationInfo</code> doesn't seem to include any reference back to the model.</p>
<p>I noticed that <code>info.config = {"title": "MyModel"}</code>. Is there a way to get the <code>MyModel</code> class from the <code>"MyModel"</code> string, and is this a reliable means of finding the model?</p>
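<p>One workaround that avoids relying on <code>ValidationInfo</code> entirely (a sketch, and it does mean repeating the type) is a small validator factory: a closure captures the expected type at annotation time, so the same wrapper logic stays reusable across models:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, Any
from pydantic import (
    BaseModel, ValidationError, ValidationInfo,
    ValidatorFunctionWrapHandler, WrapValidator
)

def typed_wrapper(expected: Any) -> WrapValidator:
    """Build a wrap validator that knows the annotated type via a closure."""
    def wrapper(value: Any,
                handler: ValidatorFunctionWrapHandler,
                info: ValidationInfo) -> Any:
        try:
            return handler(value)
        except ValidationError:
            if expected is str:
                ...            # custom handling for str fields (placeholder)
            elif expected == (int | bool):
                ...            # custom handling for int | bool fields (placeholder)
            else:
                raise
    return WrapValidator(wrapper)

class MyModel(BaseModel):
    foo: Annotated[str, typed_wrapper(str)]

class AnotherModel(BaseModel):
    bar: Annotated[int | bool, typed_wrapper(int | bool)]
</code></pre>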
<hr />
<p>This is not a duplicate of <a href="https://stackoverflow.com/questions/73566439/how-to-get-the-type-of-a-validated-field-in-pydantic-validator-method">this question</a>, which concerns the <strong>decorator</strong> pattern for validators, not the <strong>annotated</strong> pattern, and also uses old syntax from v1.</p>
|
<python><validation><python-typing><pydantic><pydantic-v2>
|
2025-10-24 15:33:29
| 1
| 444
|
kviLL
|
79,798,911
| 5,877,334
|
Microsoft Graph API: Intune Win32 App Upload - commitFileFailed Error After Successful API Calls
|
<p>I'm experiencing persistent <code>commitFileFailed</code> errors when uploading .intunewin files to Microsoft Intune via the Graph API. All API calls return success status codes, but the final commit processing fails internally.</p>
<p><strong>API Workflow</strong></p>
<p>Following the standard Microsoft Graph API sequence for Win32 app uploads:</p>
<ol>
<li><code>POST /deviceAppManagement/mobileApps</code> (Create Win32LobApp)</li>
<li><code>POST /mobileApps/{id}/contentVersions</code> (Create content version)</li>
<li><code>POST /contentVersions/{version}/files</code> (Create file entry)</li>
<li>Wait for Azure Storage URI (Poll until <code>azureStorageUriRequestSuccess</code>)</li>
<li><code>PUT</code> to Azure Storage URI (Upload .intunewin file)</li>
<li><code>POST /files/{fileId}/commit</code> (Commit file) (Returns HTTP 200)</li>
</ol>
<p><strong>Issue Pattern</strong></p>
<ul>
<li><strong>Step 1-5</strong>: All succeed with proper HTTP 201/200 responses</li>
<li><strong>Step 6</strong>: Commit API returns HTTP 200 OK</li>
<li><strong>Final Result</strong>: <code>uploadState: commitFileFailed</code></li>
</ul>
<p>The failure occurs during Intune's internal processing <strong>after</strong> the commit API succeeds.</p>
<p><strong>FileEncryptionInfo Payload Structure</strong></p>
<pre class="lang-json prettyprint-override"><code>{
"fileEncryptionInfo": {
"encryptionKey": "base64-encoded-32-bytes",
"macKey": "base64-encoded-32-bytes",
"initializationVector": "base64-encoded-16-bytes",
"mac": "base64-encoded-32-bytes",
"profileIdentifier": "ProfileVersion1",
"fileDigest": "base64-encoded-32-bytes",
"fileDigestAlgorithm": "SHA256",
"@odata.type": "microsoft.graph.fileEncryptionInfo"
},
"size": 3064190,
"name": "example-installer.intunewin"
}
</code></pre>
<h4>Validation Performed</h4>
<p><strong>Field Validation</strong>: All base64 fields decode to Microsoft's required byte lengths:</p>
<ul>
<li><code>initializationVector</code>: 16 bytes</li>
<li><code>mac</code>: 32 bytes</li>
<li><code>macKey</code>: 32 bytes</li>
<li><code>encryptionKey</code>: 32 bytes</li>
<li><code>fileDigest</code>: 32 bytes</li>
</ul>
<p><strong>File Creation</strong>: Using official Microsoft Win32 Content Prep Tool v1.8.7</p>
<p><strong>Encryption Metadata</strong>: Extracted directly from <code>.intunewin</code> Detection.xml</p>
<p><strong>Azure Storage</strong>: Proper headers, MD5 validation, successful upload with matching ETag</p>
<p><strong>API Compliance</strong>: Following Microsoft Graph API documentation exactly</p>
<h4>Environment Details</h4>
<ul>
<li><strong>API Version</strong>: Microsoft Graph v1.0</li>
<li><strong>Authentication</strong>: App Registration with <code>DeviceManagementApps.ReadWrite.All</code></li>
<li><strong>Upload Method</strong>: Single PUT (files < 4MB per Microsoft guidance)</li>
<li><strong>File Format</strong>: .intunewin packages created with Microsoft's official tool</li>
</ul>
<p><strong>Has anyone encountered</strong> similar issues where the commit API succeeds but processing fails?
<strong>Are there additional headers or fields</strong> required for the commit request that aren't documented?</p>
<h4>What I've Tried</h4>
<ul>
<li>Validated all encryption fields meet Microsoft's binary length requirements</li>
<li>Confirmed .intunewin file structure matches Microsoft format</li>
<li>Tested with different file sizes and types</li>
<li>Added delays for backend processing</li>
<li>Verified Azure Storage upload integrity with MD5 checksums</li>
</ul>
<p>The issue occurs consistently across different .intunewin files. All client-side validation passes Microsoft's specifications, suggesting this might be a service-side processing issue.</p>
<p>Thanks for any help!</p>
|
<python><azure><microsoft-graph-api><microsoft-graph-intune>
|
2025-10-24 15:12:03
| 0
| 1,099
|
aprasanth
|
79,798,721
| 5,378,816
|
TaskGroup in Python 3.11 freezes if one task raises an exception - is it a known bug?
|
<p>There are two tasks in a task group. One of them raises, the other one should be cancelled. This works fine in Python 3.12+, but freezes in Python 3.11. Older versions did not support task groups.</p>
<p>Is this a known problem? They will probably not fix a bug in 3.11 at this stage of its life-cycle. I'm looking for information how to avoid, solve or mitigate the issue. So far I found out that small changes in the code make a difference. And it looks like the <code>wait_for</code> plays some role there.</p>
<pre><code>import asyncio
ev = asyncio.Event()
async def t1():
try:
while True:
try:
print("Waiting")
await asyncio.wait_for(ev.wait(), 99)
print("Done waiting")
ev.clear()
except TimeoutError:
print("Timeout")
raise
except asyncio.CancelledError:
print("Cancelled - as expected")
raise
async def t2():
ev.set()
raise RuntimeError()
async def main():
try:
async with asyncio.TaskGroup() as tg:
tg.create_task(t1())
tg.create_task(t2())
except* RuntimeError:
print("RuntimeError - as expected")
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>Normal output:</p>
<pre class="lang-none prettyprint-override"><code>Waiting
Done waiting
Waiting
Cancelled - as expected
RuntimeError - as expected
</code></pre>
<p>Wrong output in Python 3.11:</p>
<pre class="lang-none prettyprint-override"><code>Waiting
Done waiting
Waiting
</code></pre>
<p>And then it hangs until Ctrl-C is pressed twice.</p>
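<p>One variation that may be worth testing on an affected 3.11 interpreter (offered only as something to try, not as a confirmed workaround) is to replace <code>wait_for()</code> with the <code>asyncio.timeout()</code> context manager, which is available since 3.11 and interacts with cancellation differently. A parameterised variant of <code>t1()</code>:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def t1(ev: asyncio.Event) -> None:
    try:
        while True:
            try:
                print("Waiting")
                async with asyncio.timeout(99):   # replaces asyncio.wait_for(ev.wait(), 99)
                    await ev.wait()
                print("Done waiting")
                ev.clear()
            except TimeoutError:
                print("Timeout")
                raise
    except asyncio.CancelledError:
        print("Cancelled - as expected")
        raise
</code></pre>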
|
<python><python-asyncio><python-3.11>
|
2025-10-24 11:34:31
| 1
| 17,998
|
VPfB
|
79,798,294
| 4,256,387
|
Cython memoryview pointer to empty array
|
<p>The Cython docs discuss passing a <a href="https://cython.readthedocs.io/en/latest/src/userguide/memoryviews.html#pass-data-from-a-c-function-via-pointer" rel="nofollow noreferrer">pointer to a memoryview</a> by using, <em>e.g.</em>:</p>
<pre><code>cdef double[::1] x_view = x # some array
cdef double* ptr = &x_view[0]
</code></pre>
<p>however, when that array <code>x</code> has no elements (<code>x.size == 0</code>), trying to access <code>x_view[0]</code> fails. For the C library to which I am making an interface, passing a <code>NULL</code> pointer leads to a segfault, so I can't just check <code>x.size</code> and pass <code>NULL</code>.</p>
<p>Instead, I need to pass a pointer to a valid memory block of size 0. Using the old numpy array interface and passing <code><double*>x.data</code> works even in the case of an empty array like <code>x = np.array([])</code>, but I am trying to move to using typed memoryviews with fused types to avoid code duplication and have more easily-enforceable type safety.</p>
<p>Is there an accepted way to handle this situation? Or are there plans for a feature like <code>x_view.ptr</code> that would handle this case internally?</p>
|
<python><cython>
|
2025-10-24 00:03:31
| 1
| 404
|
Bernie Roesler
|
79,798,283
| 3,577,105
|
PyQt: how to prevent QPushButton from growing to fit long text
|
<p>I have a <code>QPushButton</code> whose width I'd like to keep at a fixed percentage of its parent <code>HBoxLayout</code> width, so that it doesn't grow when its text is set to something very long.</p>
<p><a href="https://i.sstatic.net/xqdCH2iI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xqdCH2iI.png" alt="enter image description here" /></a></p>
<p>Right now, when the button's text is set to something very long, the button expands to fit that long text, and also expands the <code>HBoxLayout</code> and its ancestors all the way up to the window.</p>
<p><a href="https://i.sstatic.net/rUfxqbSk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUfxqbSk.png" alt="enter image description here" /></a></p>
<p>I do plan to do a custom elide using <code>QFontMetrics</code> to try to make some elided text that fits the current pushbutton width. But, I also want to handle the case where the custom elide result is a bit wider than expected: I don't want this to cause the entire window to get wider. If the text is too wide, I would rather just chop the text off.</p>
<p>So: here was one attempt, which would only make sense if the window is non-resizable:</p>
<pre><code>self.ui.pushButton.setFixedWidth(self.ui.pushButton.width())
</code></pre>
<p>Adding this line does successfully prevent the widget from growing - longer text is just chopped off - but it also makes the widget narrower:</p>
<p><a href="https://i.sstatic.net/19UHTPJ3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19UHTPJ3.png" alt="enter image description here" /></a></p>
<p>In the end application I have 2 labels and a combobox in this same <code>HBoxLayout</code> - I don't want them to be squeezed / reduced either.</p>
<p>This is probably due to my failure to understand how the various size hints and size policies work together. Back to the original question, what's the right way to make this 'force-elided' pushbutton, at its 'full' 'normal' width?</p>
<p>Below is the code:</p>
<p>guiTest.py (top level code):</p>
<pre><code>from widthTest_ui import Ui_Dialog
from PyQt5.QtWidgets import QDialog,QApplication
import sys
class widthTestDialog(QDialog,Ui_Dialog):
def __init__(self,parent):
QDialog.__init__(self)
self.parent=parent
self.ui=Ui_Dialog()
self.ui.setupUi(self)
# self.ui.pushButton.setFixedWidth(self.ui.pushButton.width())
self.txt=''
def onClick(self,e):
self.txt+='a'
self.ui.pushButton.setText(self.txt)
def main():
app = QApplication(sys.argv)
global w # so that eFilter can call methods of the top level widget
w = widthTestDialog(app)
w.show()
sys.exit(app.exec_())
if __name__ == "__main__":
main()
</code></pre>
<p>widthTest_ui.py - from pyuic5:</p>
<pre><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(176, 87)
self.verticalLayout = QtWidgets.QVBoxLayout(Dialog)
self.verticalLayout.setObjectName("verticalLayout")
self.horizontalLayout = QtWidgets.QHBoxLayout()
self.horizontalLayout.setObjectName("horizontalLayout")
self.pushButton = QtWidgets.QPushButton(Dialog)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Fixed)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.pushButton.sizePolicy().hasHeightForWidth())
self.pushButton.setSizePolicy(sizePolicy)
font = QtGui.QFont()
font.setFamily("Segoe UI")
font.setPointSize(12)
self.pushButton.setFont(font)
self.pushButton.setObjectName("pushButton")
self.horizontalLayout.addWidget(self.pushButton)
self.verticalLayout.addLayout(self.horizontalLayout)
self.retranslateUi(Dialog)
self.pushButton.clicked.connect(Dialog.onClick) # type: ignore
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
_translate = QtCore.QCoreApplication.translate
Dialog.setWindowTitle(_translate("Dialog", "Dialog"))
self.pushButton.setText(_translate("Dialog", "PushButton"))
</code></pre>
<p><strong>UPDATE</strong> based on comments:</p>
<p>As suggested, uncommenting that line in <code>__init__</code> and using sizeHint works - the widget fills the 'initially expected' width, and doesn't grow as text gets too long:</p>
<pre><code>self.ui.pushButton.setFixedWidth(self.ui.pushButton.sizeHint().width())
</code></pre>
<hr />
<p><strong>NOTE:</strong> that concludes the answer to the initial question; will post it as an answer; the portion of the question below here is really just 'getting greedy' and is expanding the scope of the initial question, so, doesn't need to be addressed here. To avoid it, I'd set the window to not be resizable.</p>
<hr />
<p>Now, I'd like to see if I can reapply that logic on window resize: as the window grows, the widget grows proportionally, but then stays there with the same fixedWidth logic. Seems like you would need to 'release' the fixed size to allow it to resize proportionally as normal, and then fix the size again when the resize is done. Was hoping this would do the trick, but alas by the time resizeEvent is fired, it's too late:</p>
<pre><code>def resizeEvent(self, e):
    print('resized')
    self.ui.pushButton.setMinimumSize(0, 0)
    self.ui.pushButton.setMaximumSize(QWIDGETSIZE_MAX, QWIDGETSIZE_MAX)
    self.ui.pushButton.setFixedWidth(self.ui.pushButton.sizeHint().width())
</code></pre>
<p>There's probably a way to accomplish that by handling a mouse press on the window edge before the resize happens, but that sounds a bit obscure and hacky. Wondering if there's a better way to achieve the overall goal - maybe just some stricter custom elide logic, and not try to do the setFixedWidth.</p>
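<p>For the 'custom elide logic' idea, a minimal sketch: elide the text manually against the button's current width with <code>QFontMetrics.elidedText</code> before calling <code>setText</code>, so the text never asks for more room than the button has. The helper name <code>setElidedText</code> and the padding value are just placeholders:</p>
<pre><code>from PyQt5.QtCore import Qt
from PyQt5.QtGui import QFontMetrics

def setElidedText(button, full_text):
    # Elide against the button's current width, minus some padding,
    # so setText never forces the layout to grow the button.
    metrics = QFontMetrics(button.font())
    available = max(0, button.width() - 16)  # padding is a guess; adjust for your style
    button.setText(metrics.elidedText(full_text, Qt.ElideRight, available))
</code></pre>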
|
<python><pyqt5><qpushbutton><elide>
|
2025-10-23 23:42:33
| 1
| 904
|
Tom Grundy
|
79,798,080
| 978,392
|
Can uv integrate with e.g. pytorch prebuilt docker env?
|
<p>So, pytorch requires a rather large bundle of packages. The prebuilt docker pytorch gpu images (<a href="https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/running.html" rel="nofollow noreferrer">https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/running.html</a>) are quite helpful in this regard.</p>
<p>On the other hand, uv and its lockfile are also a very handy utility.</p>
<p>How to marry those two? I.e. I am looking for a dockerfile, which roughly looks like</p>
<pre class="lang-none prettyprint-override"><code>FROM nvcr.io/nvidia/pytorch:<xx.xx>-py3
# more preparation ...
RUN uv sync --locked # does NOT fetch pytorch again
# some more preparation ...
CMD ["python", "-m", ""src.main:train_model"]
</code></pre>
<p>or, alternatively: Is it possible to tell uv somehow that some deps, albeit mentioned in the uv.lock, are provided by the global environment which provides python and pytorch?</p>
|
<python><docker><pytorch><uv>
|
2025-10-23 18:18:49
| 1
| 5,327
|
helt
|
79,798,077
| 10,916,136
|
Unable to create a dist/.app file using py2app on MacOS
|
<p>I am trying to create a standalone Python-based application on macOS. It is a simple GUI using tkinter that does a few calculations and shows the result on the screen. It also generates a JSON file and saves it in the directory the input file is read from.</p>
<p>I have built it as an exe file on Windows using PyInstaller successfully, and it works perfectly with no errors.</p>
<p>Now I am trying to do the same thing on MacOS using py2app. I have python installed through conda. py2app also installed through conda on the same environment.</p>
<p>When I run the <code>python3 setup.py py2app</code> command, it generates the <code>build</code> and <code>dist</code> directory. However, only the <code>build</code> directory has many files. The <code>dist</code> directory stays empty.</p>
<p>Expectation: A <code>.app</code> file should have been created in the <code>dist</code> directory.</p>
<p>How to solve this?</p>
|
<python><macos><py2app>
|
2025-10-23 18:17:29
| 0
| 571
|
Veki
|
79,798,047
| 9,010,859
|
How to convert Excel file with multiple headers and sub-headers to nested JSON as shown
|
<p><a href="https://i.sstatic.net/n9inJNPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n9inJNPN.png" alt="enter image description here" /></a></p>
<p>I have tried to convert the Excel file data shown in the picture to JSON format, but I am not able to convert it properly to the structure I require. The structure I require is as below:</p>
<pre><code>{
    "ID": "000 001 234",
    "Label": "XYZ",
    "Category 1": { "Sub1": "129.75", "Sub2": "0.00", "Sub3": "0.00" },
    "Order Amount": "234.00",
    "Penalty": "111.00",
    "Fees": "3,456.00",
    "Category2": { "Sub21": "0.00", "Sub22": "0.00", "Sub23": "0.00" },
    "Invoice": "11.00"
},
{
    "ID": "000 001 235",
    "Label": "XYZ",
    "Category 1": { "Sub1": "1.75", "Sub2": "0.00", "Sub3": "0.00" },
    "Order Amount": "111.00",
    "Penalty": "0.00",
    "Fees": "2,343.00",
    "Category2": { "Sub21": "0.00", "Sub22": "0.00", "Sub23": "0.00" },
    "Invoice": "2.00"
},
</code></pre>
<p>The code I tried is below:</p>
<pre><code>from openpyxl import load_workbook
from json import dumps
# Load Excel workbook
wb = load_workbook("sample.xlsx")
# Choose a specific sheet
sheet = wb["Sheet1"]
# Find the number of rows and columns in the sheet
rows = sheet.max_row
columns = sheet.max_column
# List to store all rows as dictionaries
lst = []
# Iterate over rows and columns to extract data
for i in range(1, rows):
    row = {}
    for j in range(1, columns):
        column_name = sheet.cell(row=1, column=j)
        row_data = sheet.cell(row=i + 1, column=j)
        row.update(
            {
                column_name.value: row_data.value
            }
        )
    lst.append(row)
# Convert extracted data into JSON format
json_data = dumps(lst)
# Print the JSON data
print(json_data)
</code></pre>
<p>Output I am getting is as below</p>
<pre><code>[{
"ID": null,
"Label": null,
"Category 1": "Sub3",
"Order Amount": null,
"Penalty": null,
"Fees": null,
"Category2": "Sub23"
},
{"ID": 1234, "Label": "XYZ", "Category 1": 0, "Order Amount": 234, "Penalty": 111, "Fees": 3456, "Category2": 0},
{"ID": 1235, "Label": "XYZ", "Category 1": 0, "Order Amount": 111, "Penalty": 0, "Fees": 2343, "Category2": 0}]
</code></pre>
<p>I am not able to get the nested JSON in the format I actually require. Any help would be appreciated.</p>
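<p>One possible approach, sketched under the assumption that the sheet has exactly two header rows (the group row and the sub-header row, as in the screenshot) and that pandas may be used alongside openpyxl; the file and sheet names are placeholders:</p>
<pre><code>import json
import pandas as pd

# Read both header rows; pandas builds a MultiIndex for the columns.
df = pd.read_excel("sample.xlsx", sheet_name="Sheet1", header=[0, 1])

records = []
for _, row in df.iterrows():
    rec = {}
    for (top, sub), value in row.items():
        # Columns without a real sub-header come back as 'Unnamed: ...'
        if str(sub).startswith("Unnamed"):
            rec[top] = value
        else:
            rec.setdefault(top, {})[sub] = value
    records.append(rec)

print(json.dumps(records, indent=2, default=str))
</code></pre>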
|
<python><json><python-3.x><excel><openpyxl>
|
2025-10-23 17:37:02
| 1
| 760
|
JagaSrik
|
79,798,021
| 19,198,552
|
Why is the order of my tkinter windows changed, when the "last" window executes another tkinter program by popen?
|
<p>I have a tkinter application with several windows, which are often placed over each other in a window stack, so that only the topmost window is visible. In this topmost window I can press a button which causes one of the not-visible windows to run popen. The process that is started is also a tkinter application and finishes automatically some milliseconds later. After this, without any other action by the user, the topmost window is moved behind the second window of my window stack and the second window becomes the topmost window.</p>
<p>Why is that? How can I keep the topmost window at top?</p>
<p>With my example code you have to create several windows by the "new window" button. Then move all windows over each other in the order the window number gives. Put the window with number 1 as topmost window. Then press "run in last window" and a new different window will show. Press there the "exit" Button and then the window with number 2 will jump into foreground.</p>
<pre><code>import tkinter as tk
import subprocess
all_windows = []
count = 1
class ButtonWindow(tk.Toplevel):
    def __init__(self):
        global count
        super().__init__()
        self.title(count)
        count += 1
        button_new = tk.Button(self, command=ButtonWindow, text="new window", width=50)
        button_run = tk.Button(self, command=self.run_in_last_window, text="run in last window", width=50)
        button_new.grid()
        button_run.grid()
        all_windows.append(self)

    def run_in_last_window(self):
        last_window = all_windows[-1]
        last_window.run_popen()

    def run_popen(self):
        print("run popen in", self)
        command_array = ["Py", "-c", "import tkinter as tk; root=tk.Tk();b=tk.Button(root,text='exit',command=exit);b.grid();root.mainloop()"]
        process = subprocess.Popen(command_array, text=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        for line in process.stdout:
            print("line =", line)
        process.wait()
root = tk.Tk()
root.withdraw()
ButtonWindow()
root.mainloop()
</code></pre>
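<p>A sketch of one workaround to try, under the assumption that the focus change is tied to blocking inside the Tk callback (the <code>process.wait()</code> and the blocking read of <code>stdout</code>): start the child without blocking, poll it with <code>after()</code>, and re-raise the window that should stay on top once the child exits. <code>check_process</code> is a made-up helper name:</p>
<pre><code>def run_popen(self):
    command_array = ["Py", "-c", "import tkinter as tk; root=tk.Tk();"
                     "b=tk.Button(root,text='exit',command=exit);b.grid();root.mainloop()"]
    process = subprocess.Popen(command_array, text=True,
                               stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

    def check_process():
        if process.poll() is None:        # child still running: check again later
            self.after(200, check_process)
        else:
            print(process.stdout.read())  # child finished; read whatever it printed
            all_windows[0].lift()         # re-raise the window that should stay on top

    self.after(200, check_process)
</code></pre>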
|
<python><tkinter>
|
2025-10-23 16:44:45
| 1
| 729
|
Matthias Schweikart
|
79,798,002
| 9,801,811
|
Why do GenPareto from SciPy and TensorFlow Probability show differences?
|
<p>I'm trying to understand why the Generalized Pareto distribution shows different results for the same parameters. The results from SciPy make sense, while the results from TensorFlow Probability do not.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import genpareto
import tensorflow_probability as tfp
tfd = tfp.distributions
c = 0.118 # concentration
loc = 17.402
scale = 37.613
gpd_1 = tfd.GeneralizedPareto(loc= loc, scale=scale, concentration=c)
gpd_2 = genpareto(c, loc=loc, scale=scale)
x = np.linspace(0, 100, 200)
# Create first figure with two subplots
fig1, (ax1, ax2) = plt.subplots(1, 2, figsize=(10,5))
ax1.plot(x, gpd_1.prob(x), label="PDF Source Dist.", color='C0', linewidth=5, alpha=0.5)
ax1.set_title('PDF Source Dist. (GenPareto tpd.GeneralizedPareto)')
ax1.legend()
ax2.plot(x, gpd_1.cdf(x), label="CDF Source Dist.", color='C0', linewidth=5, alpha=0.5)
ax2.set_title('CDF Source Dist. (GenPareto tpd.GeneralizedPareto)')
ax2.legend()
fig1.suptitle('GenPareto from tpf.GeneralizedPareto concentration=0.118, loc=17.402, scale=37.613')
fig1.tight_layout(rect=[0, 0, 1, 0.95]) # Avoid title overlap
# Create second figure with two subplots
fig2, (ax1_2, ax2_2) = plt.subplots(1, 2, figsize=(10,5))
ax1_2.plot(x, gpd_2.pdf(x), label="PDF Source Dist.", color='C0', linewidth=5, alpha=0.5)
ax1_2.set_title('PDF Source Dist. (GenPareto Scipy)')
ax1_2.legend()
ax2_2.plot(x, gpd_2.cdf(x), label="CDF Source Dist.", color='C0', linewidth=5, alpha=0.5)
ax2_2.set_title('CDF Source Dist. (GenPareto Scipy)')
ax2_2.legend()
fig2.suptitle('GenPareto from Scipy C=0.118, loc=17.402, scale=37.613')
fig2.tight_layout(rect=[0, 0, 1, 0.95])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/51QnLj5H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51QnLj5H.png" alt="enter image description here" /></a></p>
|
<python><tensorflow><scipy><tensorflow-probability>
|
2025-10-23 16:10:19
| 1
| 447
|
PPR
|
79,797,891
| 1,538,258
|
Apache Beam 2.68.0 throws "Using fallback deterministic coder for type" warning
|
<p>In the latest Apache Beam 2.68.0, they have changed the behavior of Coders for non-primitive objects. (see the changelog <a href="https://beam.apache.org/blog/beam-2.68.0/#breaking-changes" rel="nofollow noreferrer">here</a>).</p>
<p>Therefore, I get a warning like this on GCP Dataflow.</p>
<pre><code>"Using fallback deterministic coder for type '<class 'shared_types.pubsub.MessageKey'>'
in 'Run Pipeline/Select latest per Key/CombinePerKey(LatestCombineFn)/GroupByKey'. "
</code></pre>
<p>This warning is explicitly mentioned in their “Breaking changes” section as well.</p>
<blockquote>
<p>(Python) The deterministic fallback coder for complex types like NamedTuple, Enum, and dataclasses now uses cloudpickle instead of dill. If your pipeline is affected, you may see a warning like: “Using fallback deterministic coder for type X…”. You can revert to the previous behavior by using the pipeline option --update_compatibility_version=2.67.0 (<a href="https://github.com/apache/beam/pull/35725" rel="nofollow noreferrer">35725</a>). Report any pickling related issues to <a href="https://github.com/apache/beam/issues/34903" rel="nofollow noreferrer">#34903</a></p>
</blockquote>
<p>Their suggestion is to pass <code>--update_compatibility_version=2.67.0</code> option to the Dataflow job.</p>
<p><strong>But, adding this option to the Dataflow job doesn't hide the warning!!!</strong></p>
<p>I'm not sure why it's happening, but I would like to know why.</p>
<p>Most importantly, <strong>I want to know how to tackle this properly.</strong></p>
<p>A portion of the code that's responsible looks like this.</p>
<pre class="lang-py prettyprint-override"><code>>> beam.WithKeys(lambda rec: rec.key).with_output_types((Tuple[MessageKey, Message]))
| "Window Input" >> beam.WindowInto(window.FixedWindows(60))
| "Select latest per Key" >> beam.combiners.Latest.PerKey() # <-- this is the culprit
| "Strip key" >> beam.Values()
</code></pre>
<p>I also added type hinting (using <code>.with_output_types((Tuple[MessageKey, Message]))</code> on the line above), but it still throws the same warning. I don't think it's needed, though, because <code>PerKey</code> from Beam already defines the input and output types.</p>
<pre class="lang-py prettyprint-override"><code>@with_input_types(tuple[K, V])
@with_output_types(tuple[K, V])
class PerKey(ptransform.PTransform):
...
</code></pre>
<p><code>MessageKey</code> is just a subclass of pydantic's <code>BaseModel</code>.</p>
<pre class="lang-py prettyprint-override"><code>class MessageKey(BaseModel):
...
</code></pre>
<p>Then, I created a custom coder:</p>
<pre class="lang-py prettyprint-override"><code>class MessageKeyCoder(Coder):
def encode(self, value: MessageKey) -> bytes:
return json.dumps(value.model_dump(), sort_keys=True).encode("utf-8")
def decode(self, encoded: bytes) -> MessageKey:
data = json.loads(encoded.decode("utf-8"))
return MessageKey(**data)
def is_deterministic(self) -> bool:
return True
def estimate_size(self, value: MessageKey) -> int:
return len(self.encode(value))
</code></pre>
<p>I registered this using:</p>
<pre class="lang-py prettyprint-override"><code>beam.coders.registry.register_coder(MessageKey, MessageKeyCoder)
</code></pre>
<p>I tried putting this in different places in the code, but nothing resolved the warning.</p>
<ul>
<li>Added in the same file as <code>MessageKeyCoder</code>, just below it.</li>
<li>Added right after all the imports in the file where the <code>pipeline</code> is defined.</li>
<li>Added inside the context manager of <code>with Pipeline(...) as p</code>.</li>
</ul>
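<p>Coming back to the <code>--update_compatibility_version</code> suggestion, a sketch of passing the option programmatically through <code>PipelineOptions</code> instead of relying on the job arguments, in case the flag is being dropped somewhere along the way; whether this actually silences the warning is part of the open question here:</p>
<pre><code>from apache_beam.options.pipeline_options import PipelineOptions

# Keyword arguments map onto the corresponding pipeline options.
options = PipelineOptions(
    flags=[],
    update_compatibility_version="2.67.0",
)

# with beam.Pipeline(options=options) as p:
#     ...
</code></pre>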
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2025-10-23 14:00:00
| 1
| 2,088
|
Praneeth Peiris
|
79,797,685
| 12,415,855
|
Creating a screenshot of the full screen using selenium in headless mode?
|
<p>I try to create a screenshot of the full screen in selenium headless-mode using the following code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
import time
options = Options()
options.add_argument('--headless=new')
options.add_argument('--window-size=1920x1080')
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
driver.get ("https://www.orf.at/")
time.sleep(2)
driver.save_screenshot("screen.png")
</code></pre>
<p>But the created screenshot is only the very upper left part of the screen:</p>
<p><a href="https://i.sstatic.net/3GNjC7zl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GNjC7zl.png" alt="enter image description here" /></a></p>
<p>And this is how the site looks like on my 1920x1080 screen:</p>
<p><a href="https://i.sstatic.net/YjqgCbix.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjqgCbix.png" alt="enter image description here" /></a></p>
<p>How can I get the full screenshot using Selenium in headless mode?</p>
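<p>A sketch of one common workaround for Chrome: measure the page size via JavaScript and resize the headless window to it before saving the screenshot. (Separately, Chrome's size flag is normally written with a comma, <code>--window-size=1920,1080</code>, which may be worth checking on its own.)</p>
<pre><code>driver.get("https://www.orf.at/")
time.sleep(2)

# Resize the headless window to the full document size, then capture
width = driver.execute_script("return document.documentElement.scrollWidth")
height = driver.execute_script("return document.documentElement.scrollHeight")
driver.set_window_size(width, height)
time.sleep(1)
driver.save_screenshot("screen_full.png")
</code></pre>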
|
<python><selenium-webdriver>
|
2025-10-23 10:40:19
| 1
| 1,515
|
Rapid1898
|
79,797,676
| 14,649,310
|
Error `set local ivfflat.probes` when using vecs query from supabase
|
<p>I am trying to use the <code>vecs</code> library from supabase (see github and docs pages).</p>
<p>I have created a collection and upserted some documents using an adapter and HNSW index like this:</p>
<pre><code>docs = vx.get_or_create_collection(
    name="my_embeddings",
    adapter=embedding_adapter,
)

docs.upsert(records=records)

docs.create_index(
    method=IndexMethod.hnsw,
    measure=IndexMeasure.cosine_distance,
    index_arguments=IndexArgsHNSW(m=16, ef_construction=64),
)
</code></pre>
<p>Then in a different script I am trying to query this collection like this:</p>
<pre><code>collection = vx.get_or_create_collection(
    name="my_embeddings",
    adapter=embedding_adapter,
)

results = collection.query(
    data=query,
    limit=top_k,
    ef_search=200,          # HNSW parameter
    skip_adapter=False,     # use adapter to convert text -> vector
    include_metadata=True,
    include_value=True,
)
</code></pre>
<p>and I get this error:</p>
<pre><code>sqlalchemy.exc.DatabaseError: (pg8000.exceptions.DatabaseError) {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "$1"', 'P': '28', 'F': 'scan.l', 'L': '1240', 'R': 'scanner_yyerror'}
[SQL: set local ivfflat.probes = %s::INTEGER]
[parameters: (10,)]
</code></pre>
<p>It seems it is trying to set something related to ivfflat indexing, but I don't want to use that: I have already indexed the table with HNSW. I don't know how to stop this error. In the documentation they seem to do exactly what I do, for example:</p>
<pre><code>docs = vx.get_or_create_collection(name="docs", dimension=3)
# add records to the collection
docs.upsert(
vectors=[
(
"vec0", # the records user defined identifier
[0.1, 0.2, 0.3], # the vector. A list or np.array
{"year": 1973} # associated metadata
)
]
)
docs.index()
docs.query(
query_vector=[0.10,0.21,0.29], # required
limit=1, # (optional) number of records to return
filters={"year": {"$eq": 1973}}, # (optional) metadata filters
measure="cosine_distance", # (optional) distance measure to use
include_value=False, # (optional) should distance measure values be returned?
include_metadata=False, # (optional) should record metadata be returned?
)
</code></pre>
|
<python><supabase><pgvector>
|
2025-10-23 10:29:53
| 0
| 4,999
|
KZiovas
|
79,797,669
| 1,823,822
|
How can I remote-debug a Python app running in Docker from PyCharm Professional?
|
<p>I’m trying to remote-debug a Python service that runs inside a Docker container using PyCharm 2025.2.3 on macOS.</p>
<p>I do have an active license, but the IDE no longer shows any obvious way to attach to a running <code>debugpy</code> session.</p>
<p><strong>Environment</strong></p>
<p>macOS Sequoia
PyCharm 2025.2 (licensed)
Docker Desktop 4.x
Python 3.10 in container
debugpy 1.8.x</p>
<p><strong>What I’m doing</strong></p>
<p>Inside the container I start the service like this:</p>
<pre><code>python -m debugpy --listen 0.0.0.0:5678 --wait-for-client devenv/bin/app.py --port 6000  # (it seems to hang forever)
</code></pre>
<p>and my <code>docker-compose.yml</code> includes:</p>
<p>ports:</p>
<ul>
<li>"6000:6000"</li>
<li>"5678:5678"</li>
</ul>
<p>From my Mac I can confirm that the port is open:</p>
<pre><code>nc -vz localhost 5678
# Connection succeeded
</code></pre>
<p>So the network part works.</p>
<p><strong>What actually happens</strong></p>
<p>Neither <code>Attach to debugpy</code> nor <code>Attach to Process</code> appears in the list of run configurations for Pycharm.</p>
<p>The only thing available is Python Debug Server. I have specified the IDE host name as: <code>0.0.0.0</code>, port as <code>5678</code> and added path mapping. But while I am trying to start debugging, it keeps saying <code>"Address already in use"</code>.</p>
<p><strong>My questions</strong></p>
<p>Did Pycharm remove “Attach to debugpy” / “Attach to Process” in PyCharm 2025.2?</p>
<p>If not, how can I enable or restore the missing remote-attach workflow?</p>
<p>Is there a new supported way to attach to debugpy inside Docker without using the Docker Interpreter?</p>
<p>Any known work-arounds (plugins, registry settings, manual templates)?</p>
<p><strong>EDIT</strong></p>
<p>I’ve already configured the Python debug server in my PyCharm IDE (screenshot below):</p>
<p><a href="https://i.sstatic.net/M2GWtzpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M2GWtzpB.png" alt="enter image description here" /></a></p>
<p>But, whenever I am clicking Debug button, I am getting error as Address already in Use.</p>
<p>I’m not sure what’s causing this conflict or what the recommended steps are to resolve it.</p>
<p>Could someone please outline the exact steps required to properly configure and start the remote debug server (for example, how to run the app.py inside container, docker-compose changes required (if any), address and port to mention in Pycharm debug server configuration)?</p>
<p>Any detailed guidance would be greatly appreciated.</p>
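<p>For reference, a sketch of the reverse-connection pattern that the "Python Debug Server" run configuration is built around (this is pydevd-based, i.e. an alternative to attaching debugpy rather than a way to restore that workflow). It assumes the container can reach the host as <code>host.docker.internal</code> and that the chosen port is free on the host; the "Address already in use" error is consistent with PyCharm trying to listen on a port that docker-compose already publishes:</p>
<pre><code># Inside the container, near the top of app.py (requires the pydevd-pycharm
# package whose version matches the PyCharm build):
import pydevd_pycharm

pydevd_pycharm.settrace(
    "host.docker.internal",   # where the PyCharm debug server is listening
    port=5678,                # must match the "Python Debug Server" run configuration
    stdoutToServer=True,
    stderrToServer=True,
    suspend=False,
)
</code></pre>
<p>In this model the IDE listens and the containerized process connects out, so the <code>5678:5678</code> mapping in docker-compose would not be needed for it, and removing it should also clear the port conflict.</p>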
|
<python><docker><pycharm><remote-debugging><debugpy>
|
2025-10-23 10:25:08
| 1
| 4,553
|
Joy
|
79,797,647
| 1,447,207
|
Evaluating pre-trained random forest in Fortran
|
<p>I have a trained random forest regressor from scikit-learn:</p>
<p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html</a></p>
<p>I then want to make use of (but not train further) this regressor in an existing program that is written in a combination of Fortran and C++. Is there any existing reasonably simple way of evaluating a pre-trained random forest in either Fortran or C++?</p>
<p>Otherwise I guess one could use code generation to emit code that implements the trained forest with hard-coded parameters.</p>
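<p>Along the code-generation line, a sketch of how the per-tree arrays can be pulled out of scikit-learn so they can be emitted as Fortran/C++ data, plus a small reference evaluator to check against <code>forest.predict</code>; the attribute names are the ones exposed by <code>tree_</code>:</p>
<pre><code>import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dump_forest(forest: RandomForestRegressor):
    """Collect the per-tree arrays needed to evaluate the forest outside Python."""
    trees = []
    for est in forest.estimators_:
        t = est.tree_
        trees.append({
            "children_left": t.children_left.copy(),   # -1 marks a leaf
            "children_right": t.children_right.copy(),
            "feature": t.feature.copy(),
            "threshold": t.threshold.copy(),
            "value": t.value[:, 0, 0].copy(),           # leaf prediction (single-output regression)
        })
    return trees

def predict_one(trees, x):
    """Reference evaluation: mean of the per-tree leaf values."""
    total = 0.0
    for t in trees:
        node = 0
        while t["children_left"][node] != -1:
            if x[t["feature"][node]] <= t["threshold"][node]:
                node = t["children_left"][node]
            else:
                node = t["children_right"][node]
        total += t["value"][node]
    return total / len(trees)
</code></pre>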
|
<python><c++><fortran><random-forest>
|
2025-10-23 09:59:50
| 0
| 803
|
Tor
|
79,797,587
| 848,475
|
How to simulate request timeout and then succeed with requests_mock
|
<p>I use <a href="https://github.com/jamielennox/requests-mock" rel="nofollow noreferrer">requests_mock</a> with <a href="https://docs.pytest.org/en/stable/" rel="nofollow noreferrer">pytest</a>. I would like to configure a <code>requests_mock</code> object to raise an exception (<code>requests.exceptions.ReadTimeout</code>) a few times and then succeed. How can I do that?</p>
<p>I know how to return various HTTP requests (pass a list to <code>response_list</code>) and how to always raise an exception (using the <code>exc</code> argument) but I don't know how to combine this.</p>
<p>To give some context, I want to test a function that retries on some errors, including <code>requests.exceptions.ReadTimeout</code>, thanks to <a href="https://github.com/jd/tenacity" rel="nofollow noreferrer">Tenacity</a>. The goal is to test that the function re-runs on a few exceptions and then succeeds.</p>
<p>Basically, this is similar to <a href="https://stackoverflow.com/questions/31547758/node-js-nock-simulate-request-timeout-and-subsequent-success">this question</a> but in Python.</p>
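<p>A sketch of what I would try first, under the assumption that requests_mock's response-list items accept the same <code>exc</code> key that a single registration does (worth verifying against the installed version); the URL and payload are placeholders:</p>
<pre><code>import requests

def test_retries_then_succeeds(requests_mock):
    requests_mock.get(
        "http://test.example/api",
        [
            {"exc": requests.exceptions.ReadTimeout},    # 1st call: raise
            {"exc": requests.exceptions.ReadTimeout},    # 2nd call: raise again
            {"json": {"ok": True}, "status_code": 200},  # 3rd call: succeed
        ],
    )
    # result = function_under_test("http://test.example/api")
    # assert result == {"ok": True}
</code></pre>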
|
<python><pytest><requests-mock>
|
2025-10-23 09:02:30
| 1
| 872
|
Mathieu Dubois
|
79,797,494
| 1,844,397
|
Narrowing dict to TypedDict
|
<p>I want to narrow an unambiguously defined <code>dict[...]</code> type in a superclass to a specific <code>TypedDict</code> in an inheriting class but I cannot figure out a way to specify a dict-based supertype that the <code>TypedDict</code> can get assigned to. I'm using pylance (standard).</p>
<p>The following works fine:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict
Base_t = TypedDict('Base_t', {})
class Base[T: Base_t]:
def get(self) -> T:
...
class Implementation_t(TypedDict):
one: str
two: int
class Implementation[U: Implementation_t](Base[U]):
def get(self) -> U:
...
</code></pre>
<p>However, if I want to specify <code>Base_t</code> more narrowly (i.e. specify the types of the map), along the idea of:</p>
<pre class="lang-py prettyprint-override"><code>type Base_t = dict[str, str|int]
</code></pre>
<p>the type checker fails assigning <code>Implementation_t</code> to <code>Base_t</code>. I need a way to define a supertype to the <code>TypedDict</code> <code>Implementation_t</code> that specifies key & value types but not any specific fields.</p>
<p><strong>N.B.:</strong> Naturally, <code>Implementation_t</code>'s key and value types are subsets of <code>Base_t</code>'s, respectively.</p>
<p>Is there a nice way to denote this?</p>
|
<python><generics><python-typing><typeddict>
|
2025-10-23 07:07:58
| 0
| 1,744
|
bossi
|
79,797,461
| 4,994,781
|
Python packaging: single-module with package data
|
<p>I want to install my single file module together with its <code>py.typed</code> file using <code>setuptools</code> and <code>pyproject.toml</code>.</p>
<p>I'm packaging a Python single-module, and so far I was using a very simple <code>pyproject.toml</code>:</p>
<pre class="lang-toml prettyprint-override"><code>[build-system]
requires = ["setuptools==80.9.0"]
build-backend = "setuptools.build_meta"
[project]
name = "sample"
version = "1.0.0"
description = "Sample single-module"
</code></pre>
<p>Project repository layout is quite simple:</p>
<pre class="lang-none prettyprint-override"><code>sample
├── pyproject.toml
└── sample.py
</code></pre>
<p>The module is installed at the root of <code>site-packages</code> and I can use it like:</p>
<pre><code>from sample import whatever
</code></pre>
<p>The problem is, I want to provide a <code>py.typed</code> for this module, so the new repository layout is this:</p>
<pre class="lang-none prettyprint-override"><code>sample
├── pyproject.toml
├── py.typed
└── sample.py
</code></pre>
<p>and the new <code>pyproject.toml</code> reads like this:</p>
<pre class="lang-toml prettyprint-override"><code>[build-system]
requires = ["setuptools==80.9.0"]
build-backend = "setuptools.build_meta"
[project]
name = "sample"
version = "1.0.0"
description = "Sample single-module"
[tool.setuptools]
include-package-data = true
package-data = {"sample" = ["py.typed"]}
</code></pre>
<p>Of course, <code>setuptools</code> still installs the module at the root of <code>site-packages</code> and does not install the data file <code>py.typed</code>. I was expecting this, and I did not find a clean solution for it, so I switched to a different repository layout, with a package and a module, like this:</p>
<pre class="lang-none prettyprint-override"><code>sample
├── __init__.py
├── pyproject.toml
└── sample
├── __init__.py
├── py.typed
└── sample.py
</code></pre>
<p>This works, but forces me to use the module as <code>import sample.sample</code> or <code>from sample import sample</code>, and I don't want this.</p>
<p>Is there alternative for:</p>
<ul>
<li>Having a direct import, no package namespace.</li>
<li>Having package data installed.</li>
<li>Avoiding a module subdirectory (not essential).</li>
</ul>
<p>I know about using <code>__init__.py</code> to import the module, so when I import <code>sample</code>, <code>sample.sample</code> is actually imported, but I'm curious about alternatives.</p>
|
<python><setuptools><python-module><python-packaging>
|
2025-10-23 06:30:44
| 1
| 580
|
Raúl Núñez de Arenas Coronado
|
79,797,328
| 7,867,195
|
aiosqlite and managing transactions
|
<p>aiosqlite's own documentation is extremely minimal so I would appreciate help from experts.</p>
<p>I have a chat client project and the backend is in Python/FastAPI, with all the relevant calls being <code>async def</code> and subsequent calls between modules all using async/await.</p>
<p>I need to use sqlite as the storage backend, necessitating the use of aiosqlite.</p>
<p>I need to ensure that all write transactions are serialized; sqlite, as far as I understand, does not do parallel writing properly. So if one coroutine (as I understand they're not exactly threads with async?) is writing, the other will have to wait until that writing is complete. ("Does not scale" does not matter, I'll do a Postgres version later). Or does sqlite actually do parallel independent write transactions fine now?</p>
<p>And, of course, I also need to ensure atomic write transactions, so that if something fails the transaction is rolled back.</p>
<p>Just how do I do this with aiosqlite? Create a new connection for every transaction, or keep a shared connection? Call some explicit "begin" before starting the writes, or not? Commit/rollback explicitly, or expect an automatic commit on leaving the scope and a rollback on an unhandled exception?</p>
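<p>A minimal sketch of one possible pattern, assuming a single shared connection guarded by an <code>asyncio.Lock</code> and explicit commit/rollback; this is one way to serialize writers, not the only correct one, and the table and method names are placeholders:</p>
<pre><code>import asyncio
import aiosqlite

class Storage:
    def __init__(self, path: str):
        self._path = path
        self._conn: aiosqlite.Connection | None = None
        self._write_lock = asyncio.Lock()

    async def open(self) -> None:
        self._conn = await aiosqlite.connect(self._path)

    async def add_message(self, chat_id: int, body: str) -> None:
        # Serialize writers explicitly; only one coroutine writes at a time.
        async with self._write_lock:
            try:
                await self._conn.execute(
                    "INSERT INTO messages (chat_id, body) VALUES (?, ?)", (chat_id, body)
                )
                await self._conn.commit()
            except Exception:
                await self._conn.rollback()
                raise

    async def close(self) -> None:
        await self._conn.close()
</code></pre>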
|
<python><database><sqlite><transactions>
|
2025-10-23 01:21:15
| 1
| 1,115
|
Mikhail Ramendik
|
79,797,134
| 14,649,310
|
How to create a vecs client using a Google Cloud connector based SQLAlchemy engine
|
<p>I am trying to use vecs this <a href="https://supabase.github.io/vecs/" rel="nofollow noreferrer">library here</a> and <a href="https://github.com/supabase/vecs" rel="nofollow noreferrer">GitHub page</a> but I have a big issue. When trying to create a vecs client it needs <em>only</em> the connection string <code>Client(connection_string="postgres_connection_string_here")</code></p>
<p>However my Postgres is deployed in Google Cloud and to connect to it I create a Google Cloud connector and with that an SQLAlchemy engine:</p>
<pre><code>from google.cloud.sql.connector import Connector

connector = Connector()

try:
    # Creator function for SQLAlchemy engine
    def getconn():
        return connector.connect(
            CLOUD_SQL_INSTANCE,
            "pg8000",
            user=DATABASE_USER,
            password=DATABASE_PASSWORD,
            db=DATABASE_NAME,
        )

    # Create SQLAlchemy engine
    engine = create_engine("postgresql+pg8000://", creator=getconn)
</code></pre>
<p>This works for my SQLAlchemy DB engine and I can run transactions successfully.</p>
<p>However <code>vecs</code> does not allow to create a client by passing the creator for the engine nor a ready-made SQLAlchemy engine. It seems to only accept the connection string.</p>
<p>I monkeypatched the <code>__init__</code> method of the Client class of <code>vecs</code> and it worked:</p>
<pre><code>from sqlalchemy import text
from vecs import Client
from books_vector_db.utils.db import engine

def _patched_init(self, engine):
    self.engine = engine
    from sqlalchemy import MetaData
    from sqlalchemy.orm import sessionmaker

    self.meta = MetaData(schema="vecs")
    self.Session = sessionmaker(self.engine)

    with self.Session() as sess:
        with sess.begin():
            sess.execute(text("create schema if not exists vecs;"))
            sess.execute(text("create extension if not exists vector;"))
            self.vector_version: str = sess.execute(
                text(
                    "select installed_version from pg_available_extensions where name = 'vector' limit 1;"
                )
            ).scalar_one()

Client.__init__ = _patched_init

def get_vecs_client() -> Client:
    return Client(engine)
</code></pre>
<p>but I can't have that in our project. Any ideas how I can create a <code>vecs</code> client for my GCP Postgres DB without hacking the library? I want to still be able to use the Google Cloud connector because, if I am not mistaken, this is the recommended and safest way to do it.</p>
<p>Any ideas how to either use the ready made SQLAlchemy engine without monkeypatching the <code>vecs</code> library or a more legit way to connect <code>vecs</code> client to my GCP database?</p>
|
<python><sqlalchemy><google-cloud-sql><supabase><pgvector>
|
2025-10-22 18:36:16
| 0
| 4,999
|
KZiovas
|
79,797,060
| 2,092,870
|
How to connect to Notion's remote MCP server from your own MCP client?
|
<p>I'm experimenting with Pydantic AI as agentic framework and want to use <a href="https://developers.notion.com/docs/mcp" rel="nofollow noreferrer">Notion's remote MCP server</a> as tool for my agent.</p>
<p>Remote server's endpoint seems to require OAuth token to be accessed, but I didn't find any information how to obtain such a token from Notion.</p>
<p>If I just plug in the MCP server to Pydantic AI as suggested by the Pydantic's docs it (as expected) doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>notion_mcp = MCPServerStreamableHTTP('https://mcp.notion.com/mcp')
agent = Agent(
name="Notion Assistant Chat Agent",
model=model,
output_type=str,
instrument=True,
toolsets=[notion_mcp]
)
# Run the agent...
</code></pre>
<p>Error:</p>
<pre><code> +-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "pai-test/.venv/lib/python3.12/site-packages/mcp/client/sse.py", line 66, in sse_client
| event_source.response.raise_for_status()
| File "pai-test/.venv/lib/python3.12/site-packages/httpx/_models.py", line 829, in raise_for_status
| raise HTTPStatusError(message, request=request, response=self)
| httpx.HTTPStatusError: Client error '401 Unauthorized' for url 'https://mcp.notion.com/mcp'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
</code></pre>
<p>How do I initiate OAuth flow for Notion's remote MCP server to ultimately be able to connect to it from Pydantic's client?</p>
|
<python><pydantic><notion-api><model-context-protocol>
|
2025-10-22 16:53:03
| 1
| 5,532
|
Krešimir Nesek
|
79,796,815
| 1,195,207
|
Python jsonschema validation always succeeds
|
<p>I am trying to migrate an application away from jsonschema.RefResolver, due to these deprecation messages when testing:</p>
<pre><code>jsonschema.RefResolver is deprecated
as of v4.18.0, in favor of the https://github.com/python-jsonschema/referencing library
</code></pre>
<p>Here is an example of the current code:</p>
<pre><code>investigation_schema_path = os.path.join(
    BASE_DIR, "..", "resources", "schemas", base_schemas_dir, "core", "investigation_schema.json"
)

with open(investigation_schema_path) as fp:
    investigation_schema = json.load(fp)

# Code below uses the deprecated jsonschema.RefResolver
resolver = RefResolver("file://" + investigation_schema_path, investigation_schema)
validator = Draft4Validator(investigation_schema, resolver=resolver)
validator.validate(json_to_validate)
</code></pre>
<p>The code above validates all test data correctly.</p>
<p>Based on reading <a href="https://python-jsonschema.readthedocs.io/en/latest/referencing/" rel="nofollow noreferrer">https://python-jsonschema.readthedocs.io/en/latest/referencing/</a> I have replaced the deprecated code with:</p>
<pre><code> schema = Resource.from_contents(investigation_schema)
registry = schema @ Registry()
validator = Draft4Validator(schema, registry)
validator.validate(json_to_validate)
</code></pre>
<p>The result of this is that any test which should detect incorrect <code>json_to_validate</code> fails as no errors are reported: <code>{'errors': [], 'warnings': [], 'validation_finished': True}</code></p>
<p>The tests which parse correct json_to_validate still appear to pass.</p>
<p>Presumably I have misunderstood how to use the new referencing library. Does anyone have any suggestions?</p>
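<p>For comparison, a sketch of how I read the referencing docs' intended usage: the validator is still given the plain schema dict (not the <code>Resource</code> wrapper), and the registry goes in as a keyword argument; whether this alone explains the silent pass is the open question:</p>
<pre><code>from referencing import Registry, Resource
from jsonschema import Draft4Validator

resource = Resource.from_contents(investigation_schema)
registry = resource @ Registry()   # registers the resource under its own $id (assumed present)

validator = Draft4Validator(investigation_schema, registry=registry)
validator.validate(json_to_validate)
</code></pre>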
|
<python><jsonschema>
|
2025-10-22 13:18:43
| 1
| 2,949
|
knirirr
|
79,796,574
| 1,157,814
|
How to prevent VS Code Pylance extension from analyzing opened editor files which are outside my project workspace?
|
<p>Note: this is not a duplicate of <a href="https://stackoverflow.com/q/78512479/1157814">How to prevent Pylance and pylint in vscode from analyzing python files not in current workspace?</a> because <strong>this question is not about file path based exclusion</strong>, instead the behavior of the extension...</p>
<p><strong>What I would like to achieve</strong></p>
<p>I would like Pylance to analyze <strong>only</strong> my project files in workspace-wide mode, regardless of what files are open in the editor. When I open external files (e.g., viewing dependency sources via Ctrl+Click, or comparing git commit diffs), Pylance analyzes those files too and displays their errors in the PROBLEMS pane.
My observation is that this is the behavior of the extension itself: when a file is opened in the editor, it analyzes it <em>unconditionally</em> and completely ignores the exclude rules.</p>
<p><strong>Current behavior</strong></p>
<p>Pylance analyzes <strong>every Python file opened in the editor</strong>, even if it's:</p>
<ul>
<li>Outside my project directory (e.g., files in <code>site-packages</code> from my venv)</li>
<li><strong>Virtual documents</strong> (e.g., git diff comparison panes, of file history panes)</li>
<li>Files from external packages I'm just browsing</li>
</ul>
<p>What is worse, these diagnostics are merged into the PROBLEMS pane mixed with my actual project issues, forcing me to close all external files to see the real state of the project. When I close those editor windows, those issues are removed from the PROBLEMS window and the behavior is back to normal: a project-wide analysis result.</p>
<p><strong>What I've already configured</strong></p>
<p><strong><code>.vscode/settings.json</code>:</strong></p>
<pre class="lang-json prettyprint-override"><code>{
"python.analysis.diagnosticMode": "workspace",
// ... other settings
}
</code></pre>
<p><strong><code>pyproject.toml</code>:</strong></p>
<pre class="lang-ini prettyprint-override"><code>[tool.pyright]
include = ["."]
exclude = [
".stdev/**",
"**/__pycache__",
".git",
"**/build",
"env/**",
"**/venv/**",
"**/.venv/**",
"**/.env/**",
"**/.tox/**",
"**/.mypy_cache/**",
"**/.pytest_cache/**"
]
diagnosticMode = "workspace"
typeCheckingMode = "standard"
pythonVersion = "3.13"
pythonPlatform = "Windows"
</code></pre>
<p><strong>Question</strong></p>
<p>Is there a Pylance extension setting or Pyright configuration that tells Pylance to ignore diagnostics from files outside the workspace <code>include</code> paths, <strong>even when those files are open in the editor?</strong></p>
<p>I've tried various <code>include</code>/<code>exclude</code> configurations in <code>pyproject.toml</code>, but Pylance still analyzes any Python file opened in the editor regardless of its location.</p>
<p><strong>Environment:</strong></p>
<ul>
<li>VS Code 1.105.1 with Pylance extension (ms-python.vscode-pylance 2025.8.3)</li>
<li>Python 3.13</li>
<li>Windows 11</li>
<li>Project uses <code>pyproject.toml</code> for configuration (prefer not to use separate <code>pyrightconfig.json</code>)</li>
</ul>
|
<python><pylance>
|
2025-10-22 08:46:38
| 0
| 36,479
|
g.pickardou
|
79,796,495
| 12,569,596
|
VSCode debugpy launcher command not executing automatically in terminal after shell initialization commands
|
<h2>Problem</h2>
<p>When launching a Python debug session in VSCode using debugpy, I see three commands spawned in the integrated terminal:</p>
<ol>
<li><code>devbox shell</code> - executes automatically ✓</li>
<li><code>source /path/to/project/.venv/bin/activate</code> - executes automatically ✓</li>
<li><code>/usr/bin/env /path/to/project/.venv/bin/python /path/to/debugpy/launcher <port> -- -m mymodule <args></code> - does NOT execute automatically ✗</li>
</ol>
<p>The first two commands run fine, but the third command (the actual debugpy launcher) just appears in the terminal without executing, causing a "Timed out waiting for launcher to connect" error. I have to manually copy and paste the command to run it.</p>
<h2>Environment</h2>
<ul>
<li><strong>OS:</strong> macOS (Apple Silicon)</li>
<li><strong>VSCode Version:</strong> Latest</li>
<li><strong>Python Extension:</strong> Latest</li>
<li><strong>Python Version:</strong> 3.12</li>
<li><strong>Environment Manager:</strong> devbox + venv</li>
</ul>
<h2>Configuration</h2>
<p><strong>launch.json:</strong></p>
<pre class="lang-json prettyprint-override"><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Run worker",
"type": "debugpy",
"request": "launch",
"preLaunchTask": "Start dev cluster",
"module": "mymodule",
"args": ["worker", "--task-queue", "${input:task-queue}"],
"env": {
"STAGE": "${input:stage}"
},
"console": "integratedTerminal"
}
]
}
</code></pre>
<p><strong>settings.json:</strong></p>
<pre class="lang-json prettyprint-override"><code>{
"python.defaultInterpreterPath": "${workspaceFolder}/.venv/bin/python",
"python.terminal.activateEnvironment": false,
"terminal.integrated.automationProfile.osx": {
"path": "/opt/homebrew/bin/zsh",
"args": ["-l"]
}
}
</code></pre>
<h2>What I've Tried</h2>
<ol>
<li>✗ Added <code>"console": "integratedTerminal"</code> to the launch configuration</li>
<li>✗ Set <code>"python.terminal.activateEnvironment": false</code></li>
<li>✗ Configured terminal automation profile with login shell</li>
</ol>
<p>None of these resolved the issue.</p>
<h2>Additional Observation</h2>
<p>I noticed that in the terminal output, the third command (debugpy launcher) has a leading space before it, while the first two commands don't:</p>
<pre><code>devbox shell
source /path/to/.venv/bin/activate
/usr/bin/env /path/to/.venv/bin/python /path/to/debugpy/launcher <port> -- -m mymodule <args>
</code></pre>
<p>Notice the space before the third command (<code> /usr/bin/env</code>). I'm not sure if this is related to the execution issue, but it's worth noting as it's the only command that fails to auto-execute.</p>
<p>Could this leading space be preventing the command from executing automatically, or is it just a symptom of how VSCode is sending the command to the terminal?</p>
<h2>Question</h2>
<p>Why is the debugpy launcher command not executing automatically when the previous shell initialization commands run fine? Is this related to how VSCode chains commands in nested shells, or is there a configuration I'm missing to ensure all three commands execute in sequence?</p>
<p>The commands appear to be sent to the terminal, but only the first two actually execute. What's the proper way to configure VSCode to execute the debugpy command in a terminal that has already run initialization commands?</p>
|
<python><shell><visual-studio-code><debugging><debugpy>
|
2025-10-22 07:12:07
| 0
| 3,005
|
kennysliding
|
79,796,429
| 1,867,328
|
Difference between 2 dates in months in Python
|
<p>I have 2 dates in YYYY-MM format</p>
<p>FirstDate = '2008-02'</p>
<p>SecondDate = '2022-12'</p>
<p>I would like to calculate the number of months between these 2 dates, reported as an integer. Is there any direct function/method available in Python to calculate this?</p>
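<p>A short sketch of the plain-arithmetic approach (standard library only), assuming the strings are always valid <code>YYYY-MM</code>:</p>
<pre><code>from datetime import datetime

FirstDate = '2008-02'
SecondDate = '2022-12'

d1 = datetime.strptime(FirstDate, '%Y-%m')
d2 = datetime.strptime(SecondDate, '%Y-%m')

months = (d2.year - d1.year) * 12 + (d2.month - d1.month)
print(months)  # 178
</code></pre>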
|
<python><python-3.x>
|
2025-10-22 06:09:43
| 1
| 3,832
|
Bogaso
|
79,796,336
| 1,799,323
|
Seaborn pairplot with one dataset having only NaN values leads to unexpected behavior
|
<p>I have two datasets I'd like to plot in a single corner plot. In some instances, one of the datasets may be empty, but I'd still like the legend to show the keys for both datasets. I thought setting the empty dataset to <code>np.nan</code> values would accomplish this, but instead the dataset with <code>np.nan</code> values somehow plots a single real-valued data point.</p>
<p>Here is an MWE with the output. How can I remove whatever this ghost point is while still keeping the legend as is?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
d = {r'x': [np.nan], r'y': [np.nan], r'z': [np.nan] }
df1 = pd.DataFrame(data=d)
d = {r'x': np.random.rand(100), r'y': np.random.rand(100), r'z': np.random.rand(100) }
df2 = pd.DataFrame(data=d)
df1['status'] = 'Success'
df2['status'] = 'Fail'
df12 = pd.concat([df2, df1])
pp = sns.pairplot(df12, hue='status', corner=True, diag_kind='hist', diag_kws={'common_norm': False, 'stat': 'probability'})
pp._legend.set_bbox_to_anchor((0.95, 0.75))
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Qs4SCKCn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qs4SCKCn.png" alt="enter image description here" /></a></p>
|
<python><seaborn>
|
2025-10-22 01:23:47
| 1
| 679
|
user1799323
|
79,796,094
| 14,566,295
|
Creating a new pandas dataframe from shape
|
<p>I have information on total number of rows and number of columns for a new pandas dataframe</p>
<pre><code>import pandas as pd
nRow = 10
nCol = 4
</code></pre>
<p>Based on this information I want to create a new dataframe where each element will be filled by 1</p>
<p>Is there any direct method available with pandas to achieve this?</p>
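<p>Two direct ways, sketched; the column names are placeholders:</p>
<pre><code>import numpy as np
import pandas as pd

nRow, nCol = 10, 4

# Via numpy
df1 = pd.DataFrame(np.ones((nRow, nCol), dtype=int))

# Or purely with pandas, broadcasting the scalar 1
df2 = pd.DataFrame(1, index=range(nRow), columns=[f"col{i}" for i in range(nCol)])
</code></pre>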
|
<python><pandas>
|
2025-10-21 17:33:47
| 3
| 1,679
|
Brian Smith
|
79,795,984
| 1,332,656
|
How to Get user_id from LangGraph Metadata?
|
<p><strong>🧩 In short, how to retrieve <code>user_id</code> at graph runtime?</strong></p>
<hr />
<p>LangGraph introduced the <code>Runtime</code> class (since v0.6.0) to replace <code>RunnableConfig</code>:</p>
<p>🔗 <a href="https://github.com/langchain-ai/langgraph/releases/tag/0.6.0" rel="nofollow noreferrer">https://github.com/langchain-ai/langgraph/releases/tag/0.6.0</a></p>
<p>What’s the correct way to access <code>user_id</code> (and other metadata) from <code>Runtime</code> inside a graph or middleware?</p>
<p>The example below is adapted from the 1.0.0+ docs:</p>
<p>🔗 <a href="https://docs.langchain.com/oss/python/langchain/runtime" rel="nofollow noreferrer">https://docs.langchain.com/oss/python/langchain/runtime</a></p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from langchain.agents import create_agent, AgentState
from langchain.agents.middleware import after_model
from langgraph.runtime import Runtime
@dataclass
class Context:
user_id: str
thread_id: str
@after_model
def log_after_model(state: AgentState, runtime: Runtime[Context]) -> dict | None:
print(f"Completed request for user: {runtime.context.user_id}")
return None
agent = create_agent(
model="deepseek-chat",
tools=[],
middleware=[log_after_model],
context_schema=Context,
)
</code></pre>
<p>Running the graph with <code>langgraph dev</code> (API v1.0.1) and calling the agent via SDK (instead of <code>invoke</code>) produces this error:</p>
<pre><code>Context.__init__() missing 2 required positional arguments: 'user_id' and 'thread_id'"
</code></pre>
<p>Before v0.6.0, this could be accessed through <code>RunnableConfig</code>, but there’s no clear example in the new docs on how to do this via <code>Runtime</code>.</p>
<p>In LangSmith, both <code>user_id</code> and <code>thread_id</code> appear under Metadata, so they clearly exist in the runtime context—but it’s unclear how to retrieve them programmatically.</p>
<p>Here’s the LangSmith screenshot for reference:</p>
<p><a href="https://i.sstatic.net/2f1pspRM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2f1pspRM.png" alt="LangSmith Screenshot with Metadata" /></a></p>
|
<python><langchain><langgraph>
|
2025-10-21 15:24:41
| 0
| 2,458
|
Alpha
|
79,795,855
| 1,719,931
|
pandas crosstab with string as second parameter
|
<p>Is this code, which works, supposed to work?</p>
<pre><code>import pandas as pd
from palmerpenguins import load_penguins
penguins = load_penguins()
pd.crosstab(penguins.species, "count")
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">species</th>
<th style="text-align: right;">count</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Adelie</td>
<td style="text-align: right;">152</td>
</tr>
<tr>
<td style="text-align: left;">Chinstrap</td>
<td style="text-align: right;">68</td>
</tr>
<tr>
<td style="text-align: left;">Gentoo</td>
<td style="text-align: right;">124</td>
</tr>
</tbody>
</table></div>
<p>Looking at the <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer">documentation for <code>pd.crosstab</code></a>, a string is not even in the allowed types for the second parameter?</p>
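<p>For what it's worth, the numbers match a plain value count, which characterizes what the scalar second argument ends up doing (a single constant column named "count"), even if it doesn't explain why the signature allows it:</p>
<pre><code># Same counts as pd.crosstab(penguins.species, "count")
print(penguins.species.value_counts().sort_index())
</code></pre>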
|
<python><pandas>
|
2025-10-21 13:10:09
| 1
| 5,202
|
robertspierre
|
79,795,854
| 1,867,328
|
Obtaining cumulative product for a list elements
|
<p>I tried to calculate the cumulative products of my list elements as below. In my list, each element is a two-dimensional list:</p>
<pre><code>import numpy as np
Orig_list = [[[1,2,3], [4,5,6], [7,8,9]], [[11,3,4], [22, 4, 5], [22, 5, 1]], [[7,6,7], [2,5,6], [4,6,7]], [[234, 56, 22], [44, 66, 33], [44, 66, 33]]]
Result_list = [np.nan] * 4
Result_list[0] = Orig_list[0]
for i in range(1, len(Result_list)):
    Result_list[i] = (np.array(Result_list[i - 1]) @ np.array(Orig_list[i])).tolist()
</code></pre>
<p>The above works, but I am looking for cleaner and faster implementation as my original list is fairly big and each element is also a large two-dimensional list.</p>
<p>Is there anything like a more direct cumulative product function/method for above calculation?</p>
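<p>One more compact sketch, assuming every element has the same shape so the whole list can be stacked into a 3-D array first; <code>itertools.accumulate</code> then folds <code>np.matmul</code> over it (call <code>.tolist()</code> on the results only if plain lists are really needed):</p>
<pre><code>from itertools import accumulate
import numpy as np

arr = np.array(Orig_list)                  # shape (n, k, k)
Result = list(accumulate(arr, np.matmul))  # Result[i] = arr[0] @ arr[1] @ ... @ arr[i]
</code></pre>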
|
<python><numpy>
|
2025-10-21 13:09:43
| 1
| 3,832
|
Bogaso
|
79,795,818
| 616,460
|
Why does OpenCV's `findContours` return a 3D array?
|
<p>In OpenCV-Python (4.12.0.88), the <a href="https://docs.opencv.org/4.x/d3/dc0/group__imgproc__shape.html#gadf1ad6a0b82947fa1fe3c3d497f260e0" rel="nofollow noreferrer"><code>findContours</code> function</a> returns a 3D array.</p>
<p>The shape of the array is always <code>(k, 1, 2)</code> where the 1st dimension is the point index (<code>k</code>) and the 3rd dimension is the point (x=0, y=1), e.g. if I wanted to print a list of contour points I'd do:</p>
<pre><code>contours = cv2.findContours(image, ...)
for k in range(contours.shape[0]):
    x = contours[k, 0, 0]
    y = contours[k, 0, 1]
    print((x, y))
</code></pre>
<p>My question is: Why is this a 3D array and what is the purpose of the second dimension? Will the second dimension ever have size greater than 1 (e.g. could <code>assert contours.shape[1]==1</code> fail)?</p>
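<p>Not an answer to the "why", but for iterating over the points the middle axis can simply be dropped; a sketch:</p>
<pre><code>pts = contours.reshape(-1, 2)   # shape (k, 2): pts[:, 0] is x, pts[:, 1] is y
for x, y in pts:
    print((x, y))
</code></pre>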
|
<python><opencv>
|
2025-10-21 12:34:27
| 0
| 40,602
|
Jason C
|
79,795,697
| 8,028,981
|
Fortran compiler in Appveyor
|
<p>For my Python project, which relies on a Fortran extension, I used AppVeyor to compile the binary wheels under Windows.</p>
<p>To this end, I use the "Visual Studio 2019" Appveyor image and include <code>C:/msys64/mingw64/bin</code> to the path before running the setup with the command <code>python.exe setup.py build_ext --inplace --compiler=mingw32 --fcompiler=gnu95</code>.</p>
<p>This used to work in the past. However, I now get the following error:</p>
<blockquote>
<p>numpy.distutils.fcompiler.CompilerNotFound: gnu95: f90 nor f77</p>
</blockquote>
<p>According to the <a href="https://www.appveyor.com/docs/windows-images-software" rel="nofollow noreferrer">Appveyor homepage</a>, "C:/msys64" should still exist. But maybe the path to the Fortran compiler has changed, or it has been removed?</p>
<p>Any idea what I can try?</p>
<p>In the following, I post the full .appveyor.yml for completeness</p>
<pre><code>skip_non_tags: true

image:
  - Visual Studio 2019

environment:
  matrix:
    - PYTHON: "C:/Python37-x64"
      NPVERS: "1.19.3"
    - PYTHON: "C:/Python38-x64"
      NPVERS: "1.19.3"
    - PYTHON: "C:/Python39-x64"
      NPVERS: "1.19.3"
    - PYTHON: "C:/Python310-x64"
      NPVERS: "1.23"

  MINGW_DIR: C:/msys64/mingw64/bin
  LDFLAGS: "-Wl,--default-image-base-low"
  TESTPYPI_USER: AmosEgel
  PYPI_USER: ... my user name
  TESTPYPI_PWD:
    secure: ... my password
  PYPI_PWD:
    secure: ... my password

clone_depth: 5

init:
  - cmd: set PATH=%MINGW_DIR%;%PATH%

build: off

after_test:
  - "%PYTHON%\\python.exe -m pip install numpy==%NPVERS%"
  - "%PYTHON%\\python.exe -m pip install wheel"
  - "%PYTHON%\\python.exe setup.py prepare"
  - "%PYTHON%\\python.exe setup.py build_ext --inplace --compiler=mingw32 --fcompiler=gnu95 -f"
  - "%PYTHON%\\python.exe setup.py bdist_wheel"

artifacts:
  - path: dist\*

deploy_script:
  - "%PYTHON%\\python.exe -m pip install twine"
  # - "%PYTHON%\\python.exe -m twine upload -u %TESTPYPI_USER% -p %TESTPYPI_PWD% --repository testpypi --skip-existing dist/*.whl"
  - "%PYTHON%\\python.exe -m twine upload -u %PYPI_USER% -p %PYPI_PWD% --skip-existing dist/*.whl"
</code></pre>
|
<python><mingw><gfortran><msys><appveyor>
|
2025-10-21 10:14:07
| 1
| 1,240
|
Amos Egel
|
79,795,621
| 3,414,982
|
AgentWorkflow doesn't call functions when using Ollama
|
<p>I modified the example from the LlamaIndex documentation: <a href="https://developers.llamaindex.ai/typescript/framework/modules/agents/agent_workflow/#single-agent-workflow" rel="nofollow noreferrer">Single Agent Workflow Example</a> to work with a local LLM using the <code>@llamaindex/ollama</code> adapter package.</p>
<pre class="lang-js prettyprint-override"><code>import { tool } from 'llamaindex';
import { agent } from '@llamaindex/workflow';
import { ollama } from '@llamaindex/ollama';
(async () => {
// Define a joke-telling tool
const jokeTool = tool(() => 'Baby Llama is called cria', {
name: 'joke',
description: 'Use this tool to get a joke',
});
// Create an single agent workflow with the tool
const jokeAgent = agent({
tools: [jokeTool],
llm: ollama({ model: 'qwen2.5-coder:14b' }),
});
// Run the workflow
const result = await jokeAgent.run('Tell me something funny');
console.log(result.data.result); // Baby Llama is called cria
console.log('---------------------');
console.log(result.data.message); // { role: 'assistant', content: 'Baby Llama is called cria' }
})().catch(console.error);
</code></pre>
<p>The output was:</p>
<pre><code>{
"name": "joke",
"arguments": {}
}
---------------------
{
role: 'assistant',
content: '{\n "name": "joke",\n "arguments": {}\n}'
}
</code></pre>
<p>The tool wasn't actually called.</p>
<hr />
<p>I'm new to the framework, but it looked like this was similar to <a href="https://github.com/run-llama/llama_index/issues/17713" rel="nofollow noreferrer">GitHub issue #17713</a>. It seems the bug was fixed in the Python version, but not in the TypeScript version. So I tried running the equivalent code in Python using <code>llama_index</code>:</p>
<pre class="lang-py prettyprint-override"><code>from llama_index.llms.ollama import Ollama
from llama_index.core.agent.workflow import FunctionAgent
async def test():
joke_agent = FunctionAgent(
tools=[joke_tool],
llm=Ollama(model='qwen2.5-coder:14b')
# llm=Ollama(model='deepseek-coder:6.7b-instruct')
)
result = await joke_agent.run(user_msg='Tell me something funny')
print('-----------------')
print(result)
print('-----------------')
def joke_tool():
'''Use this tool to get a joke'''
return 'Baby Llama is called cria'
</code></pre>
<p>The output was similar:</p>
<pre><code>-----------------
{
"name": "joke_tool",
"arguments": {}
}
-----------------
</code></pre>
<hr />
<p>Either I'm doing something wrong, or there's a bug in Ollama, or ... I don't know.</p>
|
<python><typescript><artificial-intelligence><llama-index><ollama>
|
2025-10-21 08:46:28
| 0
| 405
|
user3414982
|
79,795,559
| 14,649,310
|
Hybrid search on Postgres with pgvector using vecs
|
<p>I have an instance of Postgres with the pgvector extension enabled. I want to know if I can easily perform hybrid search on my data using both a vector similarity search as well as keyword matching. Other vector databases like Vespa and Pinecone I believe offer this natively.</p>
<p>Postgres with pgvector does not offer that natively (you can combine separate lexical and semantic searches, then rerank), but I found this Python library called <strong><code>vecs</code></strong> (see the <a href="https://supabase.github.io/vecs/" rel="nofollow noreferrer">official docs</a> and <a href="https://github.com/supabase/vecs" rel="nofollow noreferrer">GitHub</a>). It offers a client that basically lets you use Postgres similarly to Pinecone, but I cannot find how to do a hybrid search directly with this library. Does anyone know?</p>
|
<python><pgvector><vector-search><semantic-search><hybrid-search>
|
2025-10-21 07:41:20
| 1
| 4,999
|
KZiovas
|
79,795,415
| 518,004
|
How to fill in a PDF form with javascript
|
<p>I have the following PDF <a href="https://www.dco.uscg.mil/Portals/9/NMC/pdfs/forms/CG_719B.pdf" rel="nofollow noreferrer">https://www.dco.uscg.mil/Portals/9/NMC/pdfs/forms/CG_719B.pdf</a></p>
<p>I have tried a number of different ways to find access to the text boxes within code,</p>
<pre><code> async function fillAllFields() {
  const file = document.getElementById('pdf-upload').files[0];
  if (!file) return alert("Please upload a PDF");

  const arrayBuffer = await file.arrayBuffer();
  const pdfDoc = await PDFLib.PDFDocument.load(arrayBuffer);
  const form = pdfDoc.getForm();
  const fields = form.getFields();

  fields.forEach(field => {
    const name = field.getName();
    try {
      form.getTextField(name).setText(name);

      const widgets = field.acroField.dict.get('Kids') || [field.acroField];
      widgets.forEach(widget => {
        const rect = widget.get('Rect');
        if (rect) {
          const [x1, y1, x2, y2] = rect.map(n => n.number);
          console.log(`Field "${name}" at [${x1}, ${y1}, ${x2}, ${y2}]`);
        }
      });
    } catch (e) {
      console.log(`Skipping non-text field: ${name}`);
    }
  });

  const pdfBytes = await pdfDoc.save();
  const blob = new Blob([pdfBytes], { type: "application/pdf" });
  const link = document.createElement("a");
  link.href = URL.createObjectURL(blob);
  link.download = "filled_with_names.pdf";
  link.click();
}
</code></pre>
<p>However, this does not give me access to the text boxes. I have tried to change it up and add text above, such as {{First_name}}, in the hope that I could access this piece of text and change it; however, when I use pdfplumber to extract the text it does not return it:</p>
<pre><code>import pdfplumber
with pdfplumber.open("CG_719B_filled.pdf") as pdf:
for page in pdf.pages:
if page.page_number == 2:
print(page.extract_text)
print(page.extract_text())
</code></pre>
<p>So now I am checking for any kind of arcoform and it does not seem to have one.</p>
<pre><code>import pdfplumber
from pdfplumber.utils.pdfinternals import resolve_and_decode, resolve

pdf = pdfplumber.open("CG_719B_filled.pdf")

def parse_field_helper(form_data, field, prefix=None):
    """appends any PDF AcroForm field/value pairs in `field` to provided `form_data` list

    if `field` has child fields, those will be parsed recursively.
    """
    resolved_field = field.resolve()
    field_name = ".".join(
        filter(lambda x: x, [prefix, resolve_and_decode(resolved_field.get("T"))])
    )
    if "Kids" in resolved_field:
        for kid_field in resolved_field["Kids"]:
            parse_field_helper(form_data, kid_field, prefix=field_name)
    if "T" in resolved_field or "TU" in resolved_field:
        # "T" is a field-name, but it's sometimes absent.
        # "TU" is the "alternate field name" and is often more human-readable
        # your PDF may have one, the other, or both.
        alternate_field_name = (
            resolve_and_decode(resolved_field.get("TU"))
            if resolved_field.get("TU")
            else None
        )
        field_value = (
            resolve_and_decode(resolved_field["V"]) if "V" in resolved_field else None
        )
        form_data.append([field_name, alternate_field_name, field_value])

form_data = []

# Check if the PDF has an AcroForm (interactive form fields)
if "AcroForm" in pdf.doc.catalog:
    acro_form = resolve(pdf.doc.catalog["AcroForm"])
    if "Fields" in acro_form:
        fields = resolve(acro_form["Fields"])
        for field in fields:
            parse_field_helper(form_data, field)
        print(form_data)
    else:
        print("PDF has AcroForm but no Fields")
else:
    print("PDF does not contain an AcroForm (no interactive form fields)")

pdf.close()
</code></pre>
<p>The script prints "PDF does not contain an AcroForm (no interactive form fields)" :(</p>
<p>I thought populating a PDF form would be straightforward, but I'm at a loss as to which path to take. I'm almost tempted to rebuild the entire form in something that can be quickly filled in with replaceable variables.</p>
<p>I would appreciate it if someone could explain what exactly the issue is and how I might resolve it: either a way to convert this into an AcroForm with fields (and a simple way to reference and fill them), or a way to recreate the form so that it can be filled in via code. One fallback I am considering is sketched below.</p>
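<p>The fallback would skip form fields entirely and stamp text at fixed coordinates onto the existing pages with reportlab and pypdf. This is only a rough sketch of the idea; the page index and coordinates are placeholders I would still need to measure against the real form:</p>
<pre><code>from io import BytesIO
from pypdf import PdfReader, PdfWriter
from reportlab.pdfgen import canvas
def overlay_text(src_path, out_path, entries):
    # entries: list of (page_index, x, y, text), coordinates in PDF points from the bottom-left
    reader = PdfReader(src_path)
    writer = PdfWriter()
    for i, page in enumerate(reader.pages):
        # Draw this page's text onto a same-sized overlay
        buf = BytesIO()
        c = canvas.Canvas(buf, pagesize=(float(page.mediabox.width), float(page.mediabox.height)))
        c.setFont("Helvetica", 9)
        for page_index, x, y, text in entries:
            if page_index == i:
                c.drawString(x, y, text)
        c.save()
        buf.seek(0)
        # Merge the overlay on top of the original page
        page.merge_page(PdfReader(buf).pages[0])
        writer.add_page(page)
    with open(out_path, "wb") as f:
        writer.write(f)
# Placeholder coordinates -- these would have to be measured on the real form
overlay_text("CG_719B.pdf", "CG_719B_overlay.pdf",
             [(1, 72, 700, "John"), (1, 250, 700, "Smith")])
</code></pre>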
|
<javascript><python><pdf><pdfplumber>
|
2025-10-21 01:52:16
| 1
| 8,739
|
Will
|
79,795,382
| 20,167,855
|
How to add a "Copy to Clipboard" button for a Dash DataTable in Python?
|
<p>I'm building a Dash app in Python and trying to add a button that lets the user copy the table content to the clipboard so they can paste it into Excel.</p>
<p>The table displays correctly, but when I click the "Copy table" button, nothing is actually copied to the clipboard — even though the message “Table copied” appears.</p>
<p>Here's a simplified version of my code:</p>
<pre><code>from dash import Dash, html, dash_table, dcc, Input, Output, State
import pandas as pd
app = Dash(__name__)
app.layout = html.Div([
html.Button("Copy table", id="copy_btn", n_clicks=0, className="btn btn-primary"),
dcc.Clipboard(id="clipboard_table", style={"display": "none"}),
html.Div(id="message_copy", style={"color": "green"}),
dash_table.DataTable(
id="sales_table",
columns=[
{"name": "Department", "id": "department"},
{"name": "Sales", "id": "sales"},
{"name": "Weight", "id": "weight"},
],
data=[
{"department": "Phones", "sales": 1000, "weight": "55.8%"},
{"department": "Computers", "sales": 600, "weight": "26.1%"},
],
style_table={'overflowX': 'auto'},
)
])
@app.callback(
Output("clipboard_table", "content"),
Output("message_copy", "children"),
Input("copy_btn", "n_clicks"),
State("sales_table", "data"),
prevent_initial_call=True
)
def copy_table(n_clicks, data):
if not data:
return "", "⚠️ No data to copy"
df = pd.DataFrame(data)
text = df.to_csv(sep="\t", index=False)
return text, "✅ Table copied. You can paste it in Excel (Ctrl+V)"
if __name__ == "__main__":
app.run_server(debug=True)
</code></pre>
<p>I’ve tried:</p>
<p>Using <code>dcc.Clipboard()</code> as shown in the Dash docs.</p>
<p>Injecting JavaScript with <code>html.Script()</code> to use <code>navigator.clipboard.writeText(text)</code> — but that doesn't run when deployed (Render blocks inline scripts).</p>
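<p>For reference, the dcc.Clipboard pattern from the Dash docs looks roughly like this (I have been trying to adapt it to the button-based layout above): the clipboard component's own n_clicks is the callback Input, so the copy happens inside its click handler, and the component stays visible instead of display: none. The IDs reuse the ones from my example:</p>
<pre><code>from dash import Dash, dash_table, dcc, html, Input, Output, State
import pandas as pd
app = Dash(__name__)
app.layout = html.Div([
    # The clipboard icon itself is what the user clicks
    dcc.Clipboard(id="clipboard_table", title="Copy table", style={"fontSize": 20}),
    dash_table.DataTable(
        id="sales_table",
        columns=[{"name": "Department", "id": "department"},
                 {"name": "Sales", "id": "sales"}],
        data=[{"department": "Phones", "sales": 1000},
              {"department": "Computers", "sales": 600}],
    ),
])
@app.callback(
    Output("clipboard_table", "content"),
    Input("clipboard_table", "n_clicks"),
    State("sales_table", "data"),
    prevent_initial_call=True,
)
def copy_table(_, data):
    # Tab-separated text pastes straight into Excel cells
    return pd.DataFrame(data).to_csv(sep="\t", index=False)
if __name__ == "__main__":
    app.run(debug=True)
</code></pre>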
<p>Is there a recommended or secure Dash-native way to do this, especially when the app is deployed (e.g., on Render or Heroku)?</p>
|
<python><render><plotly-dash>
|
2025-10-21 00:24:41
| 2
| 351
|
Francisco Augusto Varela Aguir
|