QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
79,795,029
| 4,256,387
|
Cython memoryviews of Python object members with fused types
|
<p>I am writing cython code with cython v3.1.5.</p>
<p>I have defined these fused types:</p>
<pre><code>ctypedef fused index_t:
int32_t
int64_t
ctypedef fused value_t:
double
double complex
</code></pre>
<p>and am trying to use this code to initialize my <code>UMFFactor</code> object with a <code>scipy.sparse.csc_array</code> object:</p>
<pre><code>cdef class UMFFactor:
cdef void* _symbolic # member to be initialized with C call in __cinit__
# ... etc.
def __cinit__(self, object A):
A, use_int32, _ = validate_csc_input(A)
self._use_int32 = use_int32
self._is_real = _is_real_dtype(A.dtype)
# Compute the symbolic analysis with C function call
cdef index_t[::1] indptr = A.indptr
cdef index_t[::1] indices = A.indices
cdef value_t[::1] data = A.data
if self._is_real:
if self._use_int32:
status = umfpack_di_symbolic(
M,
N,
&indptr[0],
&indices[0],
&data[0],
&self._symbolic,
self._control.data,
self._info.data
)
else:
# ... etc. to dispatch the other C function calls.
</code></pre>
<p>but I get this compiler error:</p>
<pre><code> Error compiling Cython file:
------------------------------------------------------------
...
# Compute the symbolic analysis
cdef Py_ssize_t M = A.shape[0]
cdef Py_ssize_t N = A.shape[1]
cdef index_t[::1] indptr = A.indptr
^
------------------------------------------------------------
sksparse/umfpack.pyx:679:36: Cannot coerce to a type that is not specialized
</code></pre>
<p>I have also tried defining a C function like:</p>
<pre><code>cdef void _init_symbolic(self, index_t[::1] indptr, index_t[::1] indices, value_t[::1] data)
</code></pre>
<p>that <code>__cinit__</code> calls, but I get a similar error:</p>
<pre><code> sksparse/umfpack.pyx:681:27: no suitable method found (candidates: 4)
</code></pre>
<p>The only working method I have found is to manually define 4 functions for each type memoryview input that I need, and skip the fused types altogether.</p>
<p>Is there something I am missing about how to use fused types? Or can they not be used on arbitrary object members, even though I know those members will be C-contiguous numpy arrays with my desired fused types?</p>
|
<python><cython><suitesparse>
|
2025-10-20 15:10:49
| 1
| 404
|
Bernie Roesler
|
79,794,971
| 14,586,554
|
Fast vectorized maximal independent set greedy algorithm
|
<p>I need a really fast vectorized maximal independent set algorithm implemented in <code>pytorch</code>, so I can use it for tasks with thousands of nodes in reasonable time.</p>
<p>I cannot use <code>networkx</code>, it is way too slow for my needs.</p>
<p>I don't need an exact algorithm, a rough greedy approximation will do the job for me. It just needs to be really fast.</p>
<p>The input is a simple adjacency matrix, and the return value should be an independent set.</p>
|
<python><algorithm><pytorch>
|
2025-10-20 13:58:38
| 1
| 620
|
Kemsikov
|
79,794,935
| 110,963
|
Isolation of a custom multiprocessing manager and how to update internal state
|
<p>I try to use a <a href="https://docs.python.org/3/library/multiprocessing.html#customized-managers" rel="nofollow noreferrer">custom multiprocessing manager</a>, mostly following the example from the docs. The main difference is that my class updates internal state. It looks like this:</p>
<pre><code>class IdIndex:
def __init__(self):
self.data = set()
self.call_count = 0
self.lock = multiprocessing.Lock()
def get_new_ones(self, ids):
with self.lock:
self.call_count += 1
new_ones = ids - self.data
self.data.update(new_ones)
return new_ones
class IndexManager(BaseManager):
pass
IndexManager.register("ids", IdIndex)
</code></pre>
<p>Later I use it like this:</p>
<pre><code>with IndexManager() as index:
# pass index.ids() proxies to subprocesses
</code></pre>
<p>My understanding is, that <code>IndexManager</code> starts a new process which hosts a single instance of <code>IdIndex</code>. If I call <code>get_new_ones</code> on one of the proxy objects the call will be forwarded to the single in instance in the dedicated process and will be processed there. So there should be only one "shared" instance of <code>IdIndex</code>. Even the <code>self.lock</code> should not be necessary.</p>
<p>Based on what I observe based on detailed logging this understanding is wrong. <code>self.call_count</code> is kind of incremented, but not sequentially. It looks like there were either multiple instances of <code>IdIndex</code> or something is cached in the proxy objects. But I have a hard time putting my finger on what's really going on. If I log <code>self.call_count</code> I get something like 1,2,3,4,4,5,6,4,5,5,7,8,8,...</p>
<p>Can somebody explain what's wrong with my understanding and how to set this up, so that I have just one single instance of <code>IdIndex</code>?</p>
|
<python><multiprocessing><python-multiprocessing>
|
2025-10-20 13:13:08
| 1
| 15,684
|
Achim
|
79,794,914
| 1,783,801
|
Are GTK2 default translations accessible from Python
|
<p>When using GTK2 with Python, there are some things like <code>'gtk-yes'</code>, <code>'gtk-open'</code> which help in getting e.g. button names translated to the according user language.</p>
<p>What I am searching for are more of these: especially the string 'recently used', like in the file open dialog, and strings for 'files' and 'folders'.</p>
<p>Finally I would like to use something like (…yes, I know Python 2 is old…)</p>
<pre><code>label = "{} {}".format(GTK_STOCK_FILES, GTK_STOCK_RECENTLY_USED)
my_button.set_label(label)
</code></pre>
<p>to ensure that in (most) languages the label makes at least some sense to show the users that it opens recent files/folders.
Unfortunately something like <code>gtk.stock_lookup(gtk.STOCK_DIRECTORY)</code> only returns an empty string (although it works for <code>STOCK_OPEN</code>), also there is nothing like that for 'files' and 'recently' I could find.</p>
<p>How can I find, list, and access all the GTK built-in translated strings?</p>
|
<python><python-2.7><gtk><gtk2>
|
2025-10-20 12:50:03
| 1
| 670
|
Jaleks
|
79,794,882
| 1,446,710
|
Tkinter: treeview inner lines' styling
|
<p>I managed to style a treeview control to this nice gridview:</p>
<p><a href="https://i.sstatic.net/Jp5eYoL2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp5eYoL2.png" alt="enter image description here" /></a></p>
<pre><code>container = tk.Frame(self, relief='sunken', borderwidth=1, bg='white')
container.grid(column=0, row=0, sticky='nsew')
container.grid_columnconfigure(0, weight=1)
container.grid_rowconfigure(0, weight=1)
style = ttk.Style()
style.layout('NoBorder.Treeview', [('Flat.Treeview', {'border': 1})])
self.tree = ttk.Treeview(container, columns=self.header, show='headings', style='NoBorder.Treeview', height=0)
vsb = ttk.Scrollbar(container, orient="vertical", command=self.tree.yview)
hsb = ttk.Scrollbar(container, orient="horizontal", command=self.tree.xview)
corner = Frame(container, bg='SystemButtonFace')
self.tree.configure(yscrollcommand=vsb.set, xscrollcommand=hsb.set)
self.tree.grid(column=0, row=0, sticky='nsew')
vsb.grid(column=1, row=0, sticky='ns')
hsb.grid(column=0, row=1, sticky='ew')
corner.grid(column=1, row=1, sticky='nsew')
</code></pre>
<p>This way, the border around it is done by the container.</p>
<p>Can I somehow change this style to draw <strong>inner</strong> lines? After each row, and each column.</p>
|
<python><tkinter>
|
2025-10-20 12:17:23
| 0
| 2,725
|
Daniel
|
79,794,872
| 9,740,712
|
How to specify the name while concurrently removing indexes from database
|
<p>I have some field in my table for which i need to remove indexing. In the django application, i saw that i could do this using the migrations.RemoveIndexConcurrently() method. However im having confusions on how to specify the name attribute with it. The previously said indexed fields were added during the time of creating the table and hence there is no separate AddIndex migration. Need to remove indexing for these fields in 2 different environments and when i looked up the names using</p>
<pre><code>SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'my_db_table_name'
</code></pre>
<p>i saw indexnames like
<code>user_secondary_mail_779d505a_like</code>, which could be different in the second environment. Is there any way i could specify the names of the fields so that i could run the migration in both environments. Any help would be greatly appreciated!</p>
|
<python><django><postgresql><indexing>
|
2025-10-20 11:59:50
| 0
| 499
|
p.ry
|
79,794,657
| 8,936,561
|
How to build a single-file namespace library with uv?
|
<p>I have a project structured like this:</p>
<pre class="lang-none prettyprint-override"><code><project-root>
├── pyproject.toml
└── src
└── a_namespace
└── a_module.py
</code></pre>
<p>With Poetry, I can install and package this project using the following settings:</p>
<pre class="lang-toml prettyprint-override"><code>[tool.poetry]
packages = [
{ include = "a_namespace/a_module.py", from = "src" },
]
</code></pre>
<p>However, I can’t get this to work with uv. According to the <a href="https://docs.astral.sh/uv/concepts/build-backend/#namespace-packages" rel="nofollow noreferrer">documentation</a>, I need to do this:</p>
<pre class="lang-toml prettyprint-override"><code>[tool.uv.build-backend]
module-name = "a_namespace.a_module"
</code></pre>
<p>But this results in the following error:</p>
<pre class="lang-none prettyprint-override"><code>× Failed to build ... @ file:///...`
╰─▶ Expected a Python module at: src/a_namespace/a_module/__init__.py
</code></pre>
<p>Adding <code>.py</code> doesn't solve the problem.</p>
<pre class="lang-toml prettyprint-override"><code>[tool.uv.build-backend]
module-name = "a_namespace.a_module.py"
</code></pre>
<pre class="lang-none prettyprint-override"><code>× Failed to build ... @ file:///...`
╰─▶ Expected a Python module at: src/a_namespace/a_module/py/__init__.py
</code></pre>
<p><a href="https://packaging.python.org/en/latest/specifications/pyproject-toml/#import-names" rel="nofollow noreferrer">import-names</a> and <a href="https://packaging.python.org/en/latest/specifications/pyproject-toml/#import-namespaces" rel="nofollow noreferrer">import-namespaces</a> don't seem to work either.</p>
<pre class="lang-toml prettyprint-override"><code>[project]
import-namespaces = ["a_namespace"]
import-names = ["a_namespace.a_module"]
</code></pre>
|
<python><uv>
|
2025-10-20 06:45:56
| 1
| 988
|
Nattōsai Mitō
|
79,794,605
| 1,446,710
|
How can I set 25-25-50 ratio between grid children?
|
<p>In tkinter I wish to achieve the above goal.</p>
<p>I can perfectly get 50-50, and 25-25-25-25. But tried everything to get 25-25-50 without success:</p>
<pre><code>import tkinter as tk
root = tk.Tk()
root.geometry("900x300")
frame = tk.Frame(root)
frame.pack(fill="both", expand=True)
# 25%, 25%, 50%
frame.columnconfigure(0, weight=1)
frame.columnconfigure(1, weight=1)
frame.columnconfigure(2, weight=2)
frame.rowconfigure(1, weight=1)
labels = [
("Field1", "#add8e6"),
("Field2", "#90ee90"),
("Field3", "#f08080"),
]
for col, (text, color) in enumerate(labels):
tk.Label(frame, text=text, bg=color).grid(
row=0, column=col, sticky="n", padx=2, pady=2)
tk.Listbox(frame).grid(
row=1, column=col, sticky="nsew", padx=5, pady=5)
root.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/LR1WBpMd.png" rel="noreferrer"><img src="https://i.sstatic.net/LR1WBpMd.png" alt="enter image description here" /></a></p>
<p><strong>Why are these same label and listbox items sized differently, and how can I make them sized correctly?</strong></p>
|
<python><tkinter>
|
2025-10-20 04:40:11
| 1
| 2,725
|
Daniel
|
79,794,169
| 835,073
|
Can we preserve the ordering of polynomial terms with sympy?
|
<p>I want to render <code>y - x^2</code> rather than <code>-x^2 +y</code> below.</p>
<pre class="lang-py prettyprint-override"><code>def f(x, y):
return y - x**2 # this expression will be changed later on
# so the rendered output must respect it!
</code></pre>
<p><a href="https://i.sstatic.net/AXsFta8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AXsFta8J.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>from sympy import latex, init_printing
from sympy.abc import x, y
from IPython.display import display, Math
init_printing(order='lex')
def f(x, y):
return y - x**2 # this expression will be changed later on
# so the rendered output must respect it!
for i in range(1,4):
tex = f"f(x,y)={latex(f(x, y))}={i}"
display(Math(tex))
</code></pre>
<p>Is it possible?</p>
|
<python><sympy>
|
2025-10-19 08:26:56
| 1
| 880
|
D G
|
79,793,948
| 5,709,144
|
Problem installing jupyter-book 2 with the uv package manager
|
<p>I am attempting to create a jupyter book 2.0 (<a href="https://next.jupyterbook.org/" rel="nofollow noreferrer">https://next.jupyterbook.org/</a>) project in a virtual envinoment managed by the <code>uv</code> package manager (<a href="https://docs.astral.sh/uv/" rel="nofollow noreferrer">https://docs.astral.sh/uv/</a>). However, the jupyter book 2.0 package never seems to download correctly.</p>
<p>These are my steps. I open a terminal, navigate to a folder, and type</p>
<pre class="lang-bash prettyprint-override"><code>uv init book
cd book
uv venv
source .venv/bin/activate
uv pip install --pre "jupyter-book==2.*"
</code></pre>
<p>This installs eighty packages, including</p>
<pre class="lang-bash prettyprint-override"><code> + jupyter-book==2.0.0b3
+ jupyter-client==8.6.3
+ jupyter-core==5.9.1
+ jupyter-events==0.12.0
+ jupyter-server==2.17.0
+ jupyter-server-terminals==0.5.3
+ jupyterlab-pygments==0.3.0
</code></pre>
<p>However, when I check the jupyter-book version with <code>jupyter-book --version</code>, I get</p>
<pre class="lang-bash prettyprint-override"><code>Jupyter Book : 1.0.4.post1
External ToC : 1.0.1
MyST-Parser : 3.0.1
MyST-NB : 1.3.0
Sphinx Book Theme : 1.1.4
Jupyter-Cache : 1.0.1
NbClient : 0.10.2
</code></pre>
<p>If I run <code>jupyter book start book</code>, the files created include <code>_config.yml</code> and <code>_toc.yml</code>. These are files created by the original Jupyter Book project, which should now be combined into one <code>myst.yml</code> file in the Jupyter Book 2 project.</p>
<p>What am I doing wrong? The machine is running macOS Tahoe 26.0.1.</p>
|
<python><uv><jupyterbook>
|
2025-10-18 20:11:56
| 3
| 1,216
|
amunnelly
|
79,793,899
| 1,232,087
|
pip uninstall fails for databricks-dlt library
|
<p>In Databricks, I had installed Databricks dlt library from <a href="https://pypi.org/project/databricks-dlt/" rel="nofollow noreferrer">here</a> using the command <code>pip install databricks-dlt</code>. But now when I try to uninstall using the command <code>pip uninstall databricks-dlt</code>, I get the following error. How can I uninstall this library:</p>
<blockquote>
<p>PipError: Command 'pip --disable-pip-version-check uninstall databricks-dlt' returned non-zero exit status 2.</p>
</blockquote>
<p><strong>Error Details</strong>:</p>
<pre><code>Found existing installation: databricks-dlt 0.3.0
Uninstalling databricks-dlt-0.3.0:
Would remove:
/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/databricks_dlt-0.3.0.dist-info/*
/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/dlt/*
ERROR: Exception:
Traceback (most recent call last):
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 106, in _run_wrapper
status = _inner_run()
^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/cli/base_command.py", line 97, in _inner_run
return self.run(options, args)
^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/commands/uninstall.py", line 106, in run
uninstall_pathset = req.uninstall(
^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/req/req_install.py", line 723, in uninstall
uninstalled_pathset.remove(auto_confirm, verbose)
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/req/req_uninstall.py", line 364, in remove
if auto_confirm or self._allowed_to_proceed(verbose):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/req/req_uninstall.py", line 404, in _allowed_to_proceed
return ask("Proceed (Y/n)? ", ("y", "n", "")) != "n"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/utils/misc.py", line 228, in ask
_check_no_input(message)
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-c872873f-ebba-4c11-8d9a-eeff12cab0bb/lib/python3.12/site-packages/pip/_internal/utils/misc.py", line 220, in _check_no_input
raise Exception(
Exception: No input was expected ($PIP_NO_INPUT set); question: Proceed (Y/n)?
</code></pre>
|
<python><pip><databricks>
|
2025-10-18 18:02:14
| 1
| 24,239
|
nam
|
79,793,878
| 3,741,030
|
Combining Identically Indexed and Column Dataframes into 3d Dataframe
|
<p>I have 3 2D DataFrames, all with identical indexes (datetime range) and column names, but different data for these labels. I would like to combine these three 2D dataframes into 1 3D DataFrame with an additional index.</p>
<p>Brief example, suppose I have 3 2D DataFrames <code>a, b, c</code> where <code>a['name']['index']</code> represents metric <code>a</code> for dataset <code>name</code> on date <code>index</code>.</p>
<p>I would like to combine these DataFrames, say into <code>df_all</code> that I can address like the following to get the same information:</p>
<p><code>df_all['name']['a']['index']</code></p>
<p>and for metric b of the same dataset on the same date:</p>
<p><code>df_all['name']['b']['index']</code></p>
<p>I cannot figure out how to do this. I cannot use xarray, it must be with numpy and pandas.</p>
<p>MCVE:</p>
<pre><code>import pandas as pd
dr = pd.date_range(dt.datetime(2025, 1, 1), dt.datetime(2025,1,5))
a = pd.DataFrame(index=dr, data={"foo":[0,1,2,3,4], "bar":[5,6,7,8,9]})
b = a*2
c = a*3
>>> a
foo bar
2025-01-01 0 5
2025-01-02 1 6
2025-01-03 2 7
2025-01-04 3 8
2025-01-05 4 9
>>> b
foo bar
2025-01-01 0 10
2025-01-02 2 12
2025-01-03 4 14
2025-01-04 6 16
2025-01-05 8 18
>>> c
foo bar
2025-01-01 0 15
2025-01-02 3 18
2025-01-03 6 21
2025-01-04 9 24
2025-01-05 12 27
>>> a['foo']['1-3-2025']
2
</code></pre>
|
<python><pandas><dataframe><numpy>
|
2025-10-18 17:14:12
| 2
| 1,599
|
cma0014
|
79,793,610
| 1,867,328
|
Calculating mean for a column of arrays in pandas
|
<p>I have below <code>pandas</code> dataframe</p>
<pre><code>import pandas as pd
import numpy as np
dat = pd.DataFrame({
'A': [1,2,3],
'B': [[[np.nan, 0.0, 0.0, 0.0, 0.0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 1]],
[[0.32894736842105265, 0.039473684210526314, 0.009868421052631578, 0.009868421052631578, 0.03289473684210526], [0.038461538461538464, 0.07692307692307693, 0.07692307692307693, 0.038461538461538464, 0.15384615384615385], [0.3333333333333333, 0.0, 0.0, 0.0, 0.6666666666666666], [0.0, 0.0, 0.0, 0.0, 1.0], [0, 0, 0, 0, 1]],
[[0.4765625, 0.03125, 0.03125, 0.0078125, 0.0078125], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 1]]]})
</code></pre>
<p>Now I would like to get the column mean for the column <code>B</code>. Given that each element of column <code>B</code> is array, obviously, mean should also be another array with same dimension.</p>
<p>Is there any way to achieve this? I tried with <code>dat['B'].mean(axis=1)</code>, but get below error,</p>
<pre><code>Traceback (most recent call last):
File "/Users/system/lib/python3.12/site-packages/pandas/core/generic.py", line 576, in _get_axis_number
return cls._AXIS_TO_AXIS_NUMBER[axis]
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
KeyError: 1
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/system/lib/python3.12/site-packages/pandas/core/series.py", line 6549, in mean
return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/system/lib/python3.12/site-packages/pandas/core/generic.py", line 12420, in mean
return self._stat_function(
^^^^^^^^^^^^^^^^^^^^
File "/Users/system/lib/python3.12/site-packages/pandas/core/generic.py", line 12377, in _stat_function
return self._reduce(
^^^^^^^^^^^^^
File "/Users/system/lib/python3.12/site-packages/pandas/core/series.py", line 6439, in _reduce
self._get_axis_number(axis)
File "/Users/system/lib/python3.12/site-packages/pandas/core/generic.py", line 578, in _get_axis_number
raise ValueError(f"No axis named {axis} for object type {cls.__name__}")
ValueError: No axis named 1 for object type Series
</code></pre>
|
<python><pandas>
|
2025-10-18 06:51:08
| 1
| 3,832
|
Bogaso
|
79,793,500
| 1,604,008
|
How do I import a base class into derived classes?
|
<p>Forgive me, I'm new to python. I hope I'm missing something.</p>
<p>Consider the following directory structure:</p>
<pre><code>dev
baseClass
SomeApplication
main1
DerivedClassA_usedInMain1
SomeOtherApp
main2
DerivedClassB_usedInMain2
</code></pre>
<p>I don't see any 'realistic' way of doing this in python. How do I import baseClass into the derived classes?</p>
|
<python><class><oop><import>
|
2025-10-17 23:59:15
| 0
| 1,159
|
user1604008
|
79,793,455
| 219,153
|
Is there a Pythonic way to get a value of a variable in a "case" statement?
|
<p>Python match statement creates problems with constants, e.g. this script will not compile:</p>
<pre><code>A, B = 13, 42
x = A
match x:
case A: print('a')
# case int(A): print('a')
# case 13: print('a')
case B: print('b')
</code></pre>
<p>unless one of commented out lines is used instead. Is there a more Pythonic and more general way to get the value of <code>A</code> than <code>int(A)</code>?</p>
|
<python><structural-pattern-matching>
|
2025-10-17 21:51:42
| 0
| 8,585
|
Paul Jurczak
|
79,793,436
| 984,003
|
Stripe checkout.Session.create : if/else for parameters for Python
|
<p>I have the following call to make a checkout session.</p>
<pre><code> session = stripe.checkout.Session.create(
success_url=success_url,
cancel_url=cancel_url,
mode='payment',
line_items=[{
'price': price_id,
'quantity': 1
}],
payment_intent_data={
"metadata": {
"mykey": "myvalue",
}
}
)
</code></pre>
<p>Sometimes I want to add</p>
<pre><code>customer_creation = 'always'
</code></pre>
<p>And sometimes I want to add</p>
<pre><code>customer=customer_id,
</code></pre>
<p>Is there a way to do that IF ELSE without copying the whole thing inside the IF and then again inside the ELSE? I also have other parameters that I sometimes include so then it would be copied four times.</p>
<p>Like, if I could build up a dictionary, and then add that to the session?</p>
<p>I tried setting the unwanted parameters to None, but it doesn't allow that.</p>
<p>This is Python 2.7. Yup.</p>
|
<python><stripe-payments>
|
2025-10-17 21:18:30
| 1
| 29,851
|
user984003
|
79,793,423
| 13,785,011
|
PyCharm autoformatter aggressively wraps SQL keywords inside Python triple-quoted strings
|
<p>I’m writing SQL inside Python using triple-quoted strings, like this:</p>
<pre><code>schema = """
CREATE TABLE IF NOT EXISTS peers (
id INTEGER PRIMARY KEY AUTOINCREMENT,
host TEXT NOT NULL,
port INTEGER NOT NULL,
UNIQUE (host, port)
)
"""
</code></pre>
<p>However, PyCharm’s autoformatter rewrites it to this unreadable form:</p>
<pre><code>schema = """
CREATE TABLE IF NOT EXISTS peers (
id
INTEGER
PRIMARY
KEY
AUTOINCREMENT,
host
TEXT
NOT
NULL,
port
INTEGER
NOT
NULL,
UNIQUE (
host,
port) ) \
"""
</code></pre>
<p>I tried disabling all wrapping options I could see in Settings → Editor → Code Style → SQL.</p>
<p>Using <code>textwrap.dedent</code> doesn’t help, because the problem is not Python indentation, but PyCharm’s formatter.</p>
<p>Disabling all formatting under Settings → Editor → Code Style → SQL prevents PyCharm from reformatting the SQL, but I still want to keep the SQL readable inside Python strings.</p>
<p>Is there a way in PyCharm to disable the aggressive keyword wrapping inside Python triple-quoted strings while keeping other autoformatting features enabled?</p>
<p>Any guidance or workaround would be appreciated.</p>
<p>Additional context:</p>
<ul>
<li>Pycharm 2025.2.3</li>
<li>tried resetting to the defaults</li>
<li>all dialects inherit general SQL</li>
<li>I have turned on Tools → Actions on save → Reformat code</li>
</ul>
|
<python><pycharm><formatting>
|
2025-10-17 20:46:14
| 1
| 469
|
Unlucky
|
79,793,380
| 1,232,087
|
Unable to import pyspark.pipelines module
|
<p>What could be a cause of the following error of my code in a Databricks notebook, and how can we fix the error?</p>
<pre class="lang-none prettyprint-override"><code>ImportError: cannot import name 'pipelines' from 'pyspark' (/databricks/python/lib/python3.12/site-packages/pyspark/__init__.py)
</code></pre>
<p>This is the top line of the Databricks notebook that throws the error:</p>
<pre><code>from pyspark import pipelines as dp
</code></pre>
<p>According to the following quote from <a href="https://docs.databricks.com/gcp/en/ldp/developer/python-dev#basics-of-python-for-pipeline-development" rel="nofollow noreferrer">Basics of Python for pipeline development</a> from Databricks' team, we need to import the above module for creating <em>Lakeflow Declarative pipelines</em> using Python:</p>
<blockquote>
<p>All Lakeflow Declarative Pipelines Python APIs are implemented in the <code>pyspark.pipelines</code> module.</p>
</blockquote>
<p>Also, as we know PySpark is an integral and primary programming interface used within the Databricks platform. So, what I may be missing here that causes the error?</p>
|
<python><pyspark><databricks>
|
2025-10-17 19:09:26
| 1
| 24,239
|
nam
|
79,793,348
| 388,520
|
cartopy - round polar projections for just the polar regions?
|
<p>I am trying to draw a pair of round polar projections centered on the north and south (celestial) poles as an adjunct to a more conventional star map. These should only cover about 20 degrees of latitude each.</p>
<p>I can get cartopy to draw reasonable-looking polar projections of <em>the entire celestial sphere</em> centered on either pole, but if I try to restrict the range of latitude, an azimuthal-equidistant plot collapses to a vertical line(!) and a stereographic projection becomes square, like this:</p>
<p><a href="https://i.sstatic.net/TMMNI9oJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMMNI9oJ.png" alt="enter image description here" /></a></p>
<ul>
<li>Top left: azeq projection of the whole celestial sphere centered on the north pole.</li>
<li>Top right: stereo projection of the whole celestial sphere centered on the south pole.</li>
<li>Bottom left: azeq projection from 70°N to the north pole.</li>
<li>Bottom right: stereo projection from 70°S to the south pole.</li>
</ul>
<p>What I want is round plots like on top, but cut off at 70°N and 70°S respectively.</p>
<p>Code I have now:</p>
<pre><code>from cartopy import crs
from math import pi as PI
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
CEL_SPHERE = crs.Globe(
ellipse=None,
semimajor_axis=180/PI,
semiminor_axis=180/PI,
)
PC_GALACTIC = crs.PlateCarree(globe=CEL_SPHERE)
def render_map(path, width, height):
fig = plt.figure(layout="constrained", figsize=(width, height))
try:
gs = GridSpec(2, 2, figure=fig)
axN1 = fig.add_subplot(
gs[0, 0],
projection=crs.AzimuthalEquidistant(
central_latitude=90,
globe=CEL_SPHERE,
)
)
axN1.gridlines(draw_labels=True)
axS2 = fig.add_subplot(
gs[0, 1],
projection=crs.SouthPolarStereo(globe=CEL_SPHERE)
)
axS2.gridlines(draw_labels=True)
axN2 = fig.add_subplot(
gs[1, 0],
projection=crs.AzimuthalEquidistant(
central_latitude=90,
globe=CEL_SPHERE,
)
)
axN2.set_extent((-180, 180, 70, 90), crs=PC_GALACTIC)
axN2.gridlines(draw_labels=True)
axS2 = fig.add_subplot(
gs[1, 1],
projection=crs.SouthPolarStereo(globe=CEL_SPHERE)
)
axS2.set_extent((-180, 180, -90, -70), crs=PC_GALACTIC)
axS2.gridlines(draw_labels=True)
fig.savefig(path)
finally:
plt.close(fig)
if __name__ == "__main__":
render_map("map_test.pdf", 12, 12)
</code></pre>
|
<python><matplotlib><cartopy>
|
2025-10-17 18:29:16
| 1
| 142,389
|
zwol
|
79,793,337
| 11,091,255
|
Transferring large file over CAN with Python canopen library unexpectedly aborts early
|
<p>We are attempting to implement firmware updates with CANopenNode. It would seem that to do this the new firmware image needs to be sent to object dictionary entry 0x1F50:1. When we try this, the process exits early, and not always after sending the same number of bytes. At times, it could fail after sending a few dozen, at other times, we can get up to 5k sent. This doesn't appear to be a failing on the receiver side as all other CAN communications execute as normal after the file transfer aborts. The specific error that gets reported back through Python is <code>Code 0x05040001, Unknown SDO command specified</code>. Naturally we have done searches on this error code number, but the search results have not turned up anything helpful as to why this process might be failing. The routine in Python to send the file is as follows:</p>
<pre><code>def do_sdo_file_upload(filename, index : int, subindex : int = 0x0) :
info_text = "@[" + hex(index) + "][" + hex(subindex) + "]"
try :
logger.info("doing file upload")
logger.info("OD entry supports 'open' method: " \
+ str(hasattr(node.sdo, 'open')))
filesize = os.path.getsize(filename)
logger.info("size of " + filename + " is " + str(filesize) + " bytes")
node.sdo.MAX_RETRIES = 6
node.sdo.PAUSE_BEFORE_SEND = 0.001
node.sdo.RESPONSE_TIMEOUT = 1.0
with open(args.filename, 'rb') as f_in, \
node.sdo.open(\
index = index, subindex = subindex, \
mode = 'wb', size = filesize, buffering = 0x00, block_transfer = False) as f_out :
bytes_sent = 0x00
chunk_size_max = 0x40
while bytes_sent < filesize :
if chunk_size_max <= (filesize - bytes_sent) :
chunk_size = chunk_size_max
else :
chunk_size = filesize - bytes_sent
chunk = f_in.read(chunk_size)
if not chunk :
break
f_out.write(chunk)
bytes_sent += chunk_size
f_out.flush()
except Exception as e :
logger.error("failed to send file to " + info_text)
log_and_bail(e)
</code></pre>
<p>One thing to note, is that most examples of doing the SDO file write specify setting <code>block_transfer = True</code>, however, if we do that, we get this error and no bytes at all are sent <code>AttributeError: 'BlockDownloadStream' object has no attribute 'crc_supported'</code>. Also note that the modifications to the SDO variables (<code>MAX_RETRIES</code>, etc.) are just in there as values to tweak to see if anything improves the performance (so far, they haven't). At this stage it is not clear what else we should look at.</p>
|
<python><canopen>
|
2025-10-17 18:11:14
| 0
| 305
|
Christopher Theriault
|
79,793,215
| 1,708,977
|
Significant import discrepancy in Python CDK v2 when enabling API Gateway logging
|
<p>I have been modifying an AWS CDK v2 stack (Python) to generate an API Gateway v2 stage with logging enabled. Sounds simple, there are countless examples on the internet, and countless agentic coding tools willing to generate it. Yet, not one of them works for me.</p>
<p>My existing CDK snippet looks like:</p>
<pre class="lang-py prettyprint-override"><code> # HTTP API Gateway with logging enabled
http_api = apigwv2.HttpApi(
self, "ServiceHttpApi",
api_name="ServiceApi",
description="API Gateway for Service"
)
</code></pre>
<p>with imports:</p>
<pre class="lang-py prettyprint-override"><code>import aws_cdk as cdk
from aws_cdk import (
aws_ecs as ecs,
aws_ec2 as ec2,
aws_ecr_assets as ecr_assets,
aws_logs as logs,
aws_certificatemanager as acm,
aws_elasticloadbalancingv2 as elbv2,
aws_apigatewayv2 as apigwv2,
aws_apigateway as apigwv1,
aws_apigatewayv2_integrations as apigwv2_integrations,
aws_iam as iam
)
from constructs import Construct
import os
</code></pre>
<p>Which I enhance with something like (countless other permutations tried):</p>
<pre class="lang-py prettyprint-override"><code> # HTTP API Gateway with logging enabled
http_api = apigwv2.HttpApi(
self, "ServiceHttpApi",
api_name="ServiceApi",
description="API Gateway for Service"
)
stage = apigwv2.HttpStage(self, "Stage",
http_api=http_api,
access_log_settings=apigwv2.AccessLogSettings(
destination=apigwv2.LogGroupLogDestination(api_log_group),
format=apigwv2.AccessLogFormat.json_with_standard_fields()
)
)
</code></pre>
<p>The errors I am getting include:</p>
<pre><code>'aws_cdk.aws_apigatewayv2' has no attribute 'AccessLogSettings'
'aws_cdk.aws_apigatewayv2' has no attribute 'AccessLogFormat'
</code></pre>
<p>In addition to verifying my versions were acceptable:</p>
<pre><code>aws-cdk-lib>=2.220.0,<3.0.0
constructs>=10.0.0,<11.0.0
</code></pre>
<p>I also ran dir on the imports to see what's included:</p>
<pre class="lang-py prettyprint-override"><code>>>> import aws_cdk
>>> dir(aws_cdk.aws_apigatewayv2)
['AccessLogDestinationConfig', 'AddApiKeyOptions', 'AddRoutesOptions', 'ApiGatewayManagedOverridesReference', 'ApiKey', 'ApiKeyOptions', 'ApiKeyProps', 'ApiMapping', 'ApiMappingAttributes', 'ApiMappingProps', 'ApiMappingReference', 'ApiReference', 'AuthorizerPayloadVersion', 'AuthorizerReference', 'BatchHttpRouteOptions', 'CfnApi', 'CfnApiGatewayManagedOverrides', 'CfnApiGatewayManagedOverridesProps', 'CfnApiMapping', 'CfnApiMappingProps', 'CfnApiProps', 'CfnAuthorizer', 'CfnAuthorizerProps', 'CfnDeployment', 'CfnDeploymentProps', 'CfnDomainName', 'CfnDomainNameProps', 'CfnIntegration', 'CfnIntegrationProps', 'CfnIntegrationResponse', 'CfnIntegrationResponseProps', 'CfnModel', 'CfnModelProps', 'CfnRoute', 'CfnRouteProps', 'CfnRouteResponse', 'CfnRouteResponseProps', 'CfnRoutingRule', 'CfnRoutingRuleProps', 'CfnStage', 'CfnStageProps', 'CfnVpcLink', 'CfnVpcLinkProps', 'ContentHandling', 'CorsHttpMethod', 'CorsPreflightOptions', 'DeploymentReference', 'DomainMappingOptions', 'DomainName', 'DomainNameAttributes', 'DomainNameProps', 'DomainNameReference', 'EndpointOptions', 'EndpointType', 'GrantInvokeOptions', 'HttpApi', 'HttpApiAttributes', 'HttpApiProps', 'HttpAuthorizer', 'HttpAuthorizerAttributes', 'HttpAuthorizerProps', 'HttpAuthorizerType', 'HttpConnectionType', 'HttpIntegration', 'HttpIntegrationProps', 'HttpIntegrationSubtype', 'HttpIntegrationType', 'HttpMethod', 'HttpNoneAuthorizer', 'HttpRoute', 'HttpRouteAuthorizerBindOptions', 'HttpRouteAuthorizerConfig', 'HttpRouteIntegration', 'HttpRouteIntegrationBindOptions', 'HttpRouteIntegrationConfig', 'HttpRouteKey', 'HttpRouteProps', 'HttpStage', 'HttpStageAttributes', 'HttpStageOptions', 'HttpStageProps', 'IAccessLogDestination', 'IAccessLogSettings', 'IApi', 'IApiGatewayManagedOverridesRef', 'IApiKey', 'IApiMapping', 'IApiMappingRef', 'IApiRef', 'IAuthorizer', 'IAuthorizerRef', 'IDeploymentRef', 'IDomainName', 'IDomainNameRef', 'IHttpApi', 'IHttpAuthorizer', 'IHttpIntegration', 'IHttpRoute', 'IHttpRouteAuthorizer', 'IHttpStage', 'IIntegration', 'IIntegrationRef', 'IIntegrationResponseRef', 'IMappingValue', 'IModelRef', 'IRoute', 'IRouteRef', 'IRouteResponseRef', 'IRoutingRuleRef', 'IStage', 'IStageRef', 'IUsagePlan', 'IVpcLink', 'IVpcLinkRef', 'IWebSocketApi', 'IWebSocketAuthorizer', 'IWebSocketIntegration', 'IWebSocketRoute', 'IWebSocketRouteAuthorizer', 'IWebSocketStage', 'IntegrationCredentials', 'IntegrationReference', 'IntegrationResponseReference', 'IpAddressType', 'LogGroupLogDestination', 'MTLSConfig', 'MappingValue', 'ModelReference', 'ParameterMapping', 'PassthroughBehavior', 'PayloadFormatVersion', 'Period', 'QuotaSettings', 'RateLimitedApiKey', 'RateLimitedApiKeyProps', 'RouteReference', 'RouteResponseReference', 'RoutingRuleReference', 'SecurityPolicy', 'StageAttributes', 'StageOptions', 'StageReference', 'ThrottleSettings', 'UsagePlan', 'UsagePlanPerApiStage', 'UsagePlanProps', 'VpcLink', 'VpcLinkAttributes', 'VpcLinkProps', 'VpcLinkReference', 'WebSocketApi', 'WebSocketApiAttributes', 'WebSocketApiKeySelectionExpression', 'WebSocketApiProps', 'WebSocketAuthorizer', 'WebSocketAuthorizerAttributes', 'WebSocketAuthorizerProps', 'WebSocketAuthorizerType', 'WebSocketIntegration', 'WebSocketIntegrationProps', 'WebSocketIntegrationType', 'WebSocketNoneAuthorizer', 'WebSocketRoute', 'WebSocketRouteAuthorizerBindOptions', 'WebSocketRouteAuthorizerConfig', 'WebSocketRouteIntegration', 'WebSocketRouteIntegrationBindOptions', 'WebSocketRouteIntegrationConfig', 'WebSocketRouteOptions', 'WebSocketRouteProps', 'WebSocketStage', 'WebSocketStageAttributes', 'WebSocketStageProps', '__all__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '_private']
</code></pre>
<p>There are interfaces like <code>IAccessLogSettings</code>, but these objects (AccessLogSettings, AccessLogFormat), which <a href="https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_apigatewayv2/README.html" rel="nofollow noreferrer">AWS documentation</a> and countless Google searches claim should be there, are not.</p>
<p>Take this example from Gemini (I have received similar from Claude, GPT), which follows the same format. This also generates the same errors.</p>
<pre class="lang-py prettyprint-override"><code> from aws_cdk import aws_apigatewayv2 as apigwv2
# Assuming 'http_api' is your existing apigwv2.HttpApi instance
# and 'log_group' is the LogGroup created in the previous step.
stage = apigwv2.HttpStage(
self,
"MyHttpStage",
http_api=http_api,
stage_name="dev", # Or your desired stage name
access_log_settings=apigwv2.AccessLogSettings(
destination=apigwv2.LogGroupLogDestination(log_group),
format=apigwv2.AccessLogFormat.json_with_standard_fields(
caller=True,
http_method=True,
ip=True,
protocol=True,
request_time=date_time=True,
resource_path=True,
response_length=True,
status=True,
user=True,
),
),
)
</code></pre>
<p>Here's a full <code>cdk synth</code> output in case it is helpful.</p>
<pre class="lang-bash prettyprint-override"><code>% cdk synth
Traceback (most recent call last):
File "/.../cdk.py", line 9, in <module>
ServiceStack(app, "ServiceStack", env=ServiceEnv.get_env())
File "/.../.pyenv/versions/3.12.12/lib/python3.12/site-packages/jsii/_runtime.py", line 118, in __call__
inst = super(JSIIMeta, cast(JSIIMeta, cls)).__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../cdk/stack.py", line 151, in __init__
access_log_settings=apigwv2.AccessLogSettings(
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'aws_cdk.aws_apigatewayv2' has no attribute 'AccessLogSettings'. Did you mean: 'IAccessLogSettings'?
</code></pre>
<p>Platform: Mac M4 Pro
Python version: 3.12.12
CDK version: 2.220.0</p>
<p>This problem persists on a linux-based CI deployment system, so it isn't specific to my local box. I have hit a wall with this one. I am new to Python CDK, but not to Python nor CDK individually.</p>
<p>What is causing this discrepancy between published documentation for CDK and what my environment seems to have imported?</p>
|
<python><aws-api-gateway><aws-cdk>
|
2025-10-17 15:29:00
| 1
| 725
|
Bit Fracture
|
79,792,858
| 11,571,390
|
How to drop rows with a boolean mask in xarray/dask without .compute() blowing up memory?
|
<p>I’m trying to subset a large <code>xarray.Dataset</code> backed by <code>Dask</code> and save it back to Zarr, but I’m running into a major memory problem when attempting to drop rows with a boolean mask.</p>
<p>Here’s a minimal working example that matches my real-world setup:</p>
<pre><code>import numpy as np
import xarray as xr
import dask.array as da
import zarr
import zipfile
# Simulate a large dataset
pos_len = 100_000_000 # rows
sample_len = 100 # samples
chunks = (100_000, 100)
data = da.random.random((pos_len, sample_len), chunks=chunks)
xds = xr.Dataset(
{"some_var": (("pos", "sample_id"), data)},
coords={"pos": np.arange(pos_len), "sample_id": np.arange(sample_len)}
)
# Build a boolean mask based on mean coverage
coverage_array = "some_var"
min_coverage = 0.5
mask_1d = xds[coverage_array].mean(dim="sample_id", skipna=True) >= min_coverage
# Attempt to drop rows where mask is False
cds_masked = xds.where(mask_1d.compute(), other=np.nan, drop=True) # <--- memory explodes here
# Without .compute() I get:
# ValueError: This will result in a dask array of unknown shape. Such arrays are unsupported by Xarray. Please compute the indexer first using .compute()
</code></pre>
<ul>
<li>If I call <code>.compute()</code> on the mask, memory usage explodes and the process crashes.</li>
<li>If I skip <code>.compute()</code>, <code>Xarray</code> errors out because it doesn’t know the shape of the result.</li>
<li>If I use <code>drop=False</code>, I avoid the error — but then downstream operations have to handle the nans.</li>
</ul>
<p>My question:
Is there any memory-safe way to drop rows with a boolean mask in <code>Xarray</code>/<code>Dask</code> without fully computing the mask first? Or at least, without computing it in a way that explodes into ram and crashes the whole computer.</p>
<p>Or is this a fundamental limitation of <code>Xarray</code>/<code>Dask</code> due to unknown chunk shapes after boolean indexing?</p>
<p>Are there known workarounds or idioms for this pattern?</p>
|
<python><dask><python-xarray>
|
2025-10-17 07:41:49
| 0
| 595
|
Gary Frewin
|
79,792,772
| 3,433,802
|
How do I represent relationships? I get errors anyway I try
|
<p>I have a sqlmodel defined database and I'm having issues with how to use sqlmodel. This is a two-table reduction of my DB.</p>
<pre><code>from __future__ import annotations
import uuid
from datetime import datetime
from sqlalchemy import (
Column,
DateTime,
ForeignKey,
String,
Text,
UniqueConstraint,
text,
)
from sqlalchemy.dialects.postgresql import UUID
from sqlmodel import Field, Relationship, SQLModel
class Base(SQLModel):
pass
class User(Base, table=True):
__tablename__ = "users"
user_id: uuid.UUID | None = Field(
default=None,
sa_column=Column(
UUID(as_uuid=True),
primary_key=True,
server_default=text("uuid_generate_v4()"),
),
)
created: datetime | None = Field(
default=None,
sa_column=Column(
DateTime(timezone=True),
nullable=False,
server_default=text("now()"),
),
)
ms_oid: uuid.UUID | None = Field(
default=None,
sa_column=Column(UUID(as_uuid=True), unique=True),
)
username: str = Field(sa_column=Column(Text, nullable=False, unique=True))
roles: list["UserRole"] = Relationship(back_populates="user")
class UserRole(Base, table=True):
__tablename__ = "user_role"
role_id: uuid.UUID | None = Field(
default=None,
sa_column=Column(
UUID(as_uuid=True),
primary_key=True,
server_default=text("uuid_generate_v4()"),
),
)
created: datetime | None = Field(
default=None,
sa_column=Column(
DateTime(timezone=True),
nullable=False,
server_default=text("now()"),
),
)
created_by: uuid.UUID = Field(
sa_column=Column(
UUID(as_uuid=True), ForeignKey("users.user_id"), nullable=False
)
)
permission: str = Field(sa_column=Column(String(32), nullable=False))
user_id: uuid.UUID = Field(
sa_column=Column(
UUID(as_uuid=True), ForeignKey("users.user_id"), nullable=False
)
)
user: "User" = Relationship(
back_populates="roles",
sa_relationship_kwargs={"foreign_keys": "UserRole.user_id"},
)
__table_args__ = (UniqueConstraint("user_id", "permission", name="uq_user_id_permission"))
</code></pre>
<p>And I'm running a simple test script like this:</p>
<pre><code>import json
from sqlmodel import select
from traveler.db.models import User
from traveler.db.session import SessionLocal
with SessionLocal() as session:
stmt = select(User)
rows = session.execute(stmt).scalars().all()
print(json.dumps([row.model_dump(mode="json") for row in rows]))
</code></pre>
<p>And I get this error:</p>
<pre><code>sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper[User(users)], expression "relationship("list['UserRole']")" seems to be using a generic class as the argument to relationship(); please state the generic argument using an annotation, e.g. "roles: Mapped[list['UserRole']] = relationship()"
</code></pre>
<p>When I change my DB definition to comply with this error, like so:</p>
<pre><code>roles: Mapped[list["UserRole"]] = Relationship(back_populates="user")
</code></pre>
<p>I get this error:</p>
<pre><code>sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper[User(users)], expression "relationship("Mapped[list['UserRole']]")" seems to be using a generic class as the argument to relationship(); please state the generic argument using an annotation, e.g. "roles: Mapped[Mapped[list['UserRole']']] = relationship()"
</code></pre>
<p>Which just continues obviously if I add a double Mapped call.</p>
<p>What is going on here?</p>
|
<python><sqlalchemy><pydantic><sqlmodel>
|
2025-10-17 05:42:26
| 0
| 1,982
|
mmachenry
|
79,792,709
| 395,857
|
How can I serve OpenGVLab/InternVL3-1B with vLLM? Getting "ValueError: Failed to apply InternVLProcessor" error upon initialization
|
<p>How can I serve OpenGVLab/InternVL3-1B with vLLM?</p>
<p>I tried running:</p>
<pre><code>conda create -y -n vllm312 python=3.12
conda activate vllm312
pip install vllm
vllm serve OpenGVLab/InternVL3-1B --trust_remote_code
</code></pre>
<p>but I get the "ValueError: Failed to apply InternVLProcessor" error upon initialization:</p>
<pre><code>(EngineCore_DP0 pid=6370) ERROR 10-16 19:45:28 [core.py:708] File "/home/colligo/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1080, in call_hf_processor
(EngineCore_DP0 pid=6370) ERROR 10-16 19:45:28 [core.py:708] raise ValueError(msg) from exc
(EngineCore_DP0 pid=6370) ERROR 10-16 19:45:28 [core.py:708] ValueError: Failed to apply InternVLProcessor on data={'text': '<image><video>', 'images': [<PIL.Image.Image image mode=RGB size=5376x448 at 0x7F62C86AC140>], 'videos': [array([[[[255, 255, 255], [...]
</code></pre>
<hr />
<p>Full error stack:</p>
<pre><code>
[1;36m(EngineCore_DP0 pid=13781)[0;0m INFO 10-16 20:16:13 [parallel_state.py:1208] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
[1;36m(EngineCore_DP0 pid=13781)[0;0m WARNING 10-16 20:16:13 [topk_topp_sampler.py:66] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
[1;36m(EngineCore_DP0 pid=13781)[0;0m WARNING 10-16 20:16:13 [__init__.py:2227] The following intended overrides are not keyword args and will be dropped: {'truncation'}
[1;36m(EngineCore_DP0 pid=13781)[0;0m WARNING 10-16 20:16:13 [processing.py:1089] InternVLProcessor did not return `BatchFeature`. Make sure to match the behaviour of `ProcessorMixin` when implementing custom processors.
[1;36m(EngineCore_DP0 pid=13781)[0;0m WARNING 10-16 20:16:13 [__init__.py:2227] The following intended overrides are not keyword args and will be dropped: {'truncation'}
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] EngineCore failed to start.
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] Traceback (most recent call last):
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/PIL/Image.py", line 3285, in fromarray
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] typemode, rawmode, color_modes = _fromarray_typemap[typekey]
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ~~~~~~~~~~~~~~~~~~^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] KeyError: ((1, 1, 3), '<i8')
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708]
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] The above exception was the direct cause of the following exception:
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708]
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] Traceback (most recent call last):
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1057, in call_hf_processor
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] output = hf_processor(**data,
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/model_executor/models/internvl.py", line 638, in __call__
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] text, video_inputs = self._preprocess_video(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/model_executor/models/internvl.py", line 597, in _preprocess_video
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] pixel_values_lst_video = self._videos_to_pixel_values_lst(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/model_executor/models/internvl.py", line 579, in _videos_to_pixel_values_lst
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] video_to_pixel_values_internvl(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/model_executor/models/internvl.py", line 301, in video_to_pixel_values_internvl
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] Image.fromarray(frame, mode="RGB"),
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/PIL/Image.py", line 3289, in fromarray
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] raise TypeError(msg) from e
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] TypeError: Cannot handle this data type: (1, 1, 3), <i8
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708]
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] The above exception was the direct cause of the following exception:
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708]
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] Traceback (most recent call last):
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 699, in run_engine_core
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] engine_core = EngineCoreProc(*args, **kwargs)
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 498, in __init__
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] super().__init__(vllm_config, executor_class, log_stats,
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/v1/engine/core.py", line 83, in __init__
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] self.model_executor = executor_class(vllm_config)
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 54, in __init__
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] self._init_executor()
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 54, in _init_executor
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] self.collective_rpc("init_device")
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 83, in collective_rpc
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] return [run_method(self.driver_worker, method, args, kwargs)]
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/utils/__init__.py", line 3122, in run_method
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] return func(*args, **kwargs)
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/worker/worker_base.py", line 259, in init_device
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] self.worker.init_device() # type: ignore
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py", line 201, in init_device
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] self.model_runner: GPUModelRunner = GPUModelRunner(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py", line 421, in __init__
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] self.mm_budget = MultiModalBudget(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/v1/worker/utils.py", line 48, in __init__
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] .get_max_tokens_per_item_by_nonzero_modality(model_config,
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/registry.py", line 167, in get_max_tokens_per_item_by_nonzero_modality
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] max_tokens_per_item = self.get_max_tokens_per_item_by_modality(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/registry.py", line 143, in get_max_tokens_per_item_by_modality
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] return profiler.get_mm_max_contiguous_tokens(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/profiling.py", line 282, in get_mm_max_contiguous_tokens
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] return self._get_mm_max_tokens(seq_len,
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/profiling.py", line 262, in _get_mm_max_tokens
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] mm_inputs = self._get_dummy_mm_inputs(seq_len, mm_counts)
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/profiling.py", line 173, in _get_dummy_mm_inputs
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] return self.processor.apply(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 2036, in apply
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ) = self._cached_apply_hf_processor(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1826, in _cached_apply_hf_processor
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ) = self._apply_hf_processor_main(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1572, in _apply_hf_processor_main
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] mm_processed_data = self._apply_hf_processor_mm_only(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1529, in _apply_hf_processor_mm_only
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] _, mm_processed_data, _ = self._apply_hf_processor_text_mm(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1456, in _apply_hf_processor_text_mm
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] processed_data = self._call_hf_processor(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/model_executor/models/internvl.py", line 952, in _call_hf_processor
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] processed_outputs = super()._call_hf_processor(prompt, mm_data,
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/model_executor/models/internvl.py", line 777, in _call_hf_processor
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] processed_outputs = super()._call_hf_processor(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1417, in _call_hf_processor
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] return self.info.ctx.call_hf_processor(
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] File "/home/dernoncourt/anaconda3/envs/vllm312/lib/python3.12/site-packages/vllm/multimodal/processing.py", line 1080, in call_hf_processor
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] raise ValueError(msg) from exc
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ValueError: Failed to apply InternVLProcessor on data={'text': '<image><video>', 'images': [<PIL.Image.Image image mode=RGB size=5376x448 at 0x7FECE46DA270>], 'videos': [array([[[[255, 255, 255],
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] [255, 255, 255],
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] [255, 255, 255],
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ...,
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] [255, 255, 255],
[...]
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] ...,
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] [255, 255, 255],
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] [255, 255, 255],
[1;36m(EngineCore_DP0 pid=13781)[0;0m ERROR 10-16 20:16:14 [core.py:708] [255, 255, 255]]]], shape=(243, 448, 448, 3))]} with kwargs={}
</code></pre>
|
<python><vllm>
|
2025-10-17 03:14:33
| 0
| 84,585
|
Franck Dernoncourt
|
79,792,670
| 395,857
|
How can I run the inference on the HunyuanImage-3.0 model?
|
<p>I follow the instructions on <a href="https://github.com/Tencent-Hunyuan/HunyuanImage-3.0" rel="nofollow noreferrer">https://github.com/Tencent-Hunyuan/HunyuanImage-3.0</a>:</p>
<pre><code>conda create -y -n hunyuan312 python=3.12
conda activate hunyuan312
# 1. First install PyTorch (CUDA 12.8 Version)
pip install torch==2.7.1 torchvision==0.22.1 torchaudio==2.7.1 --index-url https://download.pytorch.org/whl/cu128
# 2. Then install tencentcloud-sdk
pip install -i https://mirrors.tencent.com/pypi/simple/ --upgrade tencentcloud-sdk-python
git clone https://github.com/Tencent-Hunyuan/HunyuanImage-3.0.git
cd HunyuanImage-3.0/
# 3. Then install other dependencies
pip install -r requirements.txt
# Download from HuggingFace and rename the directory.
# Notice that the directory name should not contain dots, which may cause issues when loading using Transformers.
hf download tencent/HunyuanImage-3.0 --local-dir ./HunyuanImage-3
</code></pre>
<p>then I try running their <a href="https://github.com/Tencent-Hunyuan/HunyuanImage-3.0/tree/8e8105b2b0a1facb76ffdc356a7c8d2e3a9eb3cf?tab=readme-ov-file#2%EF%B8%8F%E2%83%A3-run-with-transformers" rel="nofollow noreferrer">example code</a>:</p>
<pre><code>from transformers import AutoModelForCausalLM
# Load the model
model_id = "./HunyuanImage-3"
# Currently we can not load the model using HF model_id `tencent/HunyuanImage-3.0` directly
# due to the dot in the name.
kwargs = dict(
attn_implementation="sdpa", # Use "flash_attention_2" if FlashAttention is installed
trust_remote_code=True,
torch_dtype="auto",
device_map="auto",
moe_impl="eager", # Use "flashinfer" if FlashInfer is installed
)
model = AutoModelForCausalLM.from_pretrained(model_id, **kwargs)
model.load_tokenizer(model_id)
# generate the image
prompt = "A brown and white dog is running on the grass"
image = model.generate_image(prompt=prompt, stream=True)
image.save("image.png")
</code></pre>
<p>But I get the error <code>OSError: No such device (os error 19)</code>:</p>
<pre><code>(hunyuan312) franck@server:/fun$ python generate_image_hyun.py
You are using a model of type hunyuan_image_3_moe to instantiate a model of type Hunyuan. This is not supported for all configurations of models and can yield errors.
`torch_dtype` is deprecated! Use `dtype` instead!
Loading checkpoint shards: 0%| | 0/32 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/fun/generate_image_hyun.py", line 21, in <module>
model = AutoModelForCausalLM.from_pretrained(model_id, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/franck/anaconda3/envs/hunyuan312/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 597, in from_pretrained
return model_class.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/franck/anaconda3/envs/hunyuan312/lib/python3.12/site-packages/transformers/modeling_utils.py", line 277, in _wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/franck/anaconda3/envs/hunyuan312/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5048, in from_pretrained
) = cls._load_pretrained_model(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/franck/anaconda3/envs/hunyuan312/lib/python3.12/site-packages/transformers/modeling_utils.py", line 5468, in _load_pretrained_model
_error_msgs, disk_offload_index = load_shard_file(args)
^^^^^^^^^^^^^^^^^^^^^
File "/home/franck/anaconda3/envs/hunyuan312/lib/python3.12/site-packages/transformers/modeling_utils.py", line 831, in load_shard_file
state_dict = load_state_dict(
^^^^^^^^^^^^^^^^
File "/home/franck/anaconda3/envs/hunyuan312/lib/python3.12/site-packages/transformers/modeling_utils.py", line 484, in load_state_dict
with safe_open(checkpoint_file, framework="pt") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: No such device (os error 19)
</code></pre>
<p>How can I fix it?</p>
<p>Same issue if I try running:</p>
<pre><code>python3 run_image_gen.py \
--model-id ./HunyuanImage-3/ \
--verbose 1 \
--prompt "A brown and white dog is running on the grass."
</code></pre>
|
<python><artificial-intelligence><image-generation><tencent>
|
2025-10-17 00:58:16
| 0
| 84,585
|
Franck Dernoncourt
|
79,792,257
| 12,415,855
|
Creating EMA using TA_Lib with python?
|
<p>i try to create an EMA calculation for a stock using talib with the following code:</p>
<pre><code>import yfinance as yf
from datetime import datetime, timedelta
import talib
tday = datetime.today()
startDay = tday - timedelta(days=365)
df = yf.download("AAPL", start=startDay, end=tday)
tmpErgSeries = talib.EMA(df["Close"], timeperiod=5)
</code></pre>
<p>But i get this error message:</p>
<pre><code>(seleniumall) C:\DEV\Fiverr2025\ORDER\VanaromHuot\StockCandle>python test.py
C:\DEV\Fiverr2025\ORDER\VanaromHuot\StockCandle\test.py:7: FutureWarning: YF.download() has changed argument auto_adjust default to True
df = yf.download("AAPL", start=startDay, end=tday)
[*********************100%***********************] 1 of 1 completed
Traceback (most recent call last):
File "C:\DEV\Fiverr2025\ORDER\VanaromHuot\StockCandle\test.py", line 9, in <module>
tmpErgSeries = talib.EMA(df["Close"], timeperiod=5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\DEV\.venv\seleniumall\Lib\site-packages\talib\__init__.py", line 80, in wrapper
result = func(*_args, **_kwds)
^^^^^^^^^^^^^^^^^^^^^
TypeError: Argument 'real' has incorrect type (expected numpy.ndarray, got DataFrame)
</code></pre>
<p>How can i create the EMA for this stock using talib?</p>
|
<python><yfinance><ta-lib>
|
2025-10-16 14:18:09
| 1
| 1,515
|
Rapid1898
|
79,792,241
| 696,836
|
Watching for changes on an array of a document in a Mongo collection only triggers on the second push, not the first
|
<p>Say I have a document in a Mongo collection that looks like the following:</p>
<pre class="lang-json prettyprint-override"><code>{
"_id": "01:550204",
"somefield1": "someValue1",
"somefield2": "someValue2",
"homeTime": {
"homeTimeInfo": [
{
"driverAsstReqPk": { "$numberLong": "8003" }
"city": "Chicago",
"state": "IL",
"zip": "60652"
}]
}
}
</code></pre>
<p>I'm writing a connector for a Kafka topic that will monitor for changes or additions to the <code>homeTimeInfo</code> item. It also needs to exclude updates to a <code>geoData</code> field that will be inserted into the <code>homeTimeInfo</code> item. I've written the following Python script to use the same connector pipeline to demonstrate.</p>
<pre class="lang-py prettyprint-override"><code>from pymongo import MongoClient
from pprint import pprint
def start_client():
connection_string = "mongodb://<someuser>:<somepass>@localhost/?retryWrites=true&w=majority"
client = MongoClient(
connection_string,
tls=False,
)
client.admin.command('ping')
print("Connected to Mongo")
return client
def change_stream(client):
db = client['driver']
collection = db['dev_drivers']
pipeline = [
{
"$match": {
"$or": [
{"operationType": "insert"},
{
"$and": [
{"operationType": "update"},
{
"$expr": {
"$gt": [
{
"$size": {
"$filter": {
"input": {
"$objectToArray": "$updateDescription.updatedFields"
},
"as": "field",
"cond": {
"$and": [
{
"$regexMatch": {
"input": "$$field.k",
"regex": "^homeTime"
}
},
{
"$not": {
"$regexMatch": {
"input": "$$field.k",
"regex": "^homeTime\\.homeTimeInfo\\.[0-9]\\d*\\.geoData\\."
}
}
}
]
}
}
}
},
0
]
}
}
]
}
]
}
},
{
"$addFields": {
"updatedHomeTimeInfoArray": {
"$map": {
"input": {
"$filter": {
"input": {
"$objectToArray": "$updateDescription.updatedFields"
},
"as": "field",
"cond": {
"$regexMatch": {
"input": "$$field.k",
"regex": "^homeTime\\.homeTimeInfo\\."
}
}
}
},
"as": "item",
"in": "$$item.v"
}
},
}
},
{
"$addFields": {
"updatedHomeTimeInfo": {
"$first": "$updatedHomeTimeInfoArray"
}
}
},
{
"$project": {
"mongoEntityLocation": "driver.hometimes.homeTimeInfo",
"id": {"$concat": [
"$fullDocument._id",
",",
{"$convert": {"input": "$updatedHomeTimeInfo.driverAsstReqPk", "to": "string"}}
]},
"ns": 1,
"city": "$updatedHomeTimeInfo.city",
"state": "$updatedHomeTimeInfo.state",
"zipCode": "$updatedHomeTimeInfo.zip"
}
}
]
with collection.watch(pipeline, full_document='updateLookup') as stream:
for change in stream:
pprint(change)
def main():
client = start_client()
change_stream(client)
if __name__ == "__main__":
main()
</code></pre>
<p>It does print the correct projected document if the driver has at least one <code>hometimes.homeTimeInfo</code> record.</p>
<pre><code>{'_id': {'_data': 'XXXXXXX'},
'city': 'JACKSON',
'id': '992224,113',
'mongoEntityLocation': 'driver.hometimes.homeTimeInfo',
'ns': {'coll': 'dev_drivers', 'db': 'driver'},
'state': 'MS',
'zipCode': '39204'}
</code></pre>
<p>However, it doesn't show a change on the first record created. I'm testing by pushing new array elements using the following in <code>mongosh</code>:</p>
<pre class="lang-js prettyprint-override"><code>db.dev_drivers.updateOne(
{ "_id": "992224" },
{
"$push": {
"homeTime.homeTimeInfo": {
"driverAsstReqPk": NumberLong("113"),
"city": "JACKSON",
"state": "MS",
"zip": "39204",
}
}
}
);
</code></pre>
<p>I've tried changing the <code>"$and"</code> to an <code>"$or"</code> between the <code>insert</code> and <code>update</code> statements. That does produce a change on the first element that's inserted, but the id,city,state,zip are empty/null.</p>
<pre><code>{'_id': {'_data': '8268FXXXXXX'},
'id': None,
'mongoEntityLocation': 'driver.hometimes.homeTimeInfo',
'ns': {'coll': 'dev_drivers', 'db': 'driver'}}
</code></pre>
<p>I'm open to a simplified query if that's a solution. My goal is to project the record shown above, with the <code>id</code> being a coma separated key of the main document and the <code>driverAsstReqPk</code> with the city, state and zip fields present, whenever a <code>hometime.homeTimeInfo</code> element inserted or updated.</p>
|
<python><mongodb><mongodb-query>
|
2025-10-16 14:07:30
| 0
| 2,798
|
djsumdog
|
79,792,205
| 577,288
|
threading.Thread cannot catch a PyCharm "stop" button termination event
|
<p>I am using Pycharm. When I click the "stop" button, the <code>subprocess.Popen</code> deletes the left over <code>.bat</code> files. But after putting the <code>subprocess.Popen</code> inside a <code>threading.Thread</code> function, the <code>.bat</code> files are no longer deleted.</p>
<p>Here is some example code:</p>
<pre><code>def threads_(title, index):
#Create and process .Bat
with open('test'+str(index)+'.bat', "w", encoding='utf-8') as a: a.write('executable_program.py -start_test "'+title+'"')
p1 = subprocess.Popen('test'+str(index)+'.bat', stdout=subprocess.PIPE, text=True)
try:
while p1.poll() is None:
time.sleep(0.5)
_output = p1.stdout.readline()
except KeyboardInterrupt as e:
#Cleanup on Exit
print('Closing ...')
for c in psutil.Process(p1.pid).children(recursive=True): c.kill()
psutil.Process(p1.pid).kill()
print('Cleanup Done')
os.remove('test'+str(index)+'.bat')
quit()
def start_threads():
remaining = ['thread_test1', 'thread_test2', 'thread_test3']
Globals.thread1 = [Globals.thread1.append(None) for index in range(0, 3)]
for index in range(0, 3):
Globals.thread1[index] = threading.Thread(name='thread' + str(index), target=threads_, args=(remaining[index], index))
Globals.thread1[index].start()
start_threads()
</code></pre>
<p>How can I make the <code>threading.Thread</code> function delete files when I push PyCharm's "stop" button?</p>
<p>I also cannot use the <code>join()</code> function when starting the threads.</p>
<p>The following code works with a tkinter <code>root.mainloop()</code>, but it only works like 40% of the time. It doesn't always trigger the <code>handler_stop_signals</code> on exit.</p>
<pre><code>def handler_stop_signals(signum, frame):
print("Hello world")
def main_thread():
Globals.root = tk.Tk()
Globals.root.geometry("+{}+{}".format(650, 50))
Globals.root.geometry("{}x{}".format(200, 200))
Globals.root.mainloop()
signal.signal(signal.SIGINT, handler_stop_signals)
signal.signal(signal.SIGTERM, handler_stop_signals)
main_thread()
</code></pre>
|
<python><subprocess><python-multithreading>
|
2025-10-16 13:35:07
| 2
| 5,408
|
Rhys
|
79,792,087
| 12,415,855
|
ScrollDown using Selenium
|
<p>i try to scrolldown on a website using selenium with the following code:
(i would like to scroll down on the left side of the website where you can see the entries)</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
import time
link = "https://www.bing.com/maps/search?style=r&q=auto+IN+Vienna&srs=sb&cp=48.209085%7E16.392537&lvl=12.2"
options = Options()
options.add_argument("start-maximized")
srv=Service()
driver = webdriver.Chrome (service=srv, options=options)
driver.get (link)
time.sleep(3)
driver.execute_script("window.scrollBy(0, 10000)")
input("Press!")
</code></pre>
<p>But it simply stay on the very top after running the code:</p>
<p><a href="https://i.sstatic.net/zOJDNlg5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOJDNlg5.png" alt="enter image description here" /></a></p>
<p>How can i scroll down using selenium to get more results on this site?</p>
|
<python><selenium-webdriver>
|
2025-10-16 11:29:28
| 2
| 1,515
|
Rapid1898
|
79,791,803
| 3,617,866
|
How do I disable autocommit and make a batch of put operations atomic?
|
<p>I’m using GridDB Cloud (Free) with the Python client, and I need to write a batch of rows to a TimeSeries container atomically (all-or-nothing). I'm having trouble figuring out how to turn autocommit off and commit/rollback explicitly.</p>
<p><strong>Minimal repro:</strong></p>
<pre><code>import griddb_python as griddb
from datetime import datetime, timezone
# connect
factory = griddb.StoreFactory.get_instance()
store = factory.get_store(
host="xxx.cloud.griddb.com",
port=31999,
cluster_name="KN3yxw8u",
username="userM016Pqo8pS",
password="****",
)
# create/open timeseries
props = griddb.TimeSeriesProperties()
props.row_key = True
props.set_column_info([
("ts", griddb.Type.TIMESTAMP),
("deviceid", griddb.Type.STRING),
("temperature", griddb.Type.DOUBLE),
])
ts = store.put_time_series("TSDB", props)
# --- transaction attempt ----------------------------------------------
# Q: Where do I disable autocommit?
# ts.set_auto_commit(False) ? store.set_auto_commit(False) ?
rows = [
(datetime(2025, 9, 1, 0, 0, tzinfo=timezone.utc), "dev-001", 25.0),
(datetime(2025, 9, 1, 1, 0, tzinfo=timezone.utc), "dev-001", 26.0),
]
try:
# try to batch insert
for r in rows:
ts.put(r)
# simulate a failure before commit
raise RuntimeError("boom")
ts.commit() # expect not to reach here
except Exception:
# I expected no rows to be visible after this
ts.rollback()
# Observed: after the exception, the new rows are visible,
# which makes it appear as though autocommit was enabled.
</code></pre>
<p><strong>Question:</strong>
How do I turn off autocommit and use explicit <code>commit()/rollback()</code> with the Python GridDB client for a TimeSeries container? What is the correct object/method to call (container vs store), and is there anything special for TimeSeries?</p>
<p><strong>Environment:</strong></p>
<ul>
<li>GridDB Cloud (Free)</li>
<li>griddb_python client</li>
<li>TimeSeries container (ts <code>TIMESTAMP</code> as row key)</li>
</ul>
|
<python><transactions><time-series><griddb>
|
2025-10-16 05:41:00
| 0
| 907
|
Badhon Ashfaq
|
79,791,649
| 17,472,988
|
Linker error when trying to install FFCV on Python/Windows
|
<p>I am trying to <code>pip install</code> <a href="https://github.com/libffcv/ffcv" rel="nofollow noreferrer">FFCV</a> on Python 3.11 / Windows 10 following instructions in their README's Windows section. I initially had MSBuild detection <a href="https://stackoverflow.com/q/79789580">issues</a>, which I resolved and verified by <code>pip install</code> sample packages without dependencies. Now I am getting</p>
<pre><code>DEPRECATION: Building 'ffcv' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'ffcv'. Discussion can be found at https://github.com/pypa/pip/issues/6334
error: subprocess-exited-with-error
python setup.py bdist_wheel did not run successfully.
exit code: 1
[32 lines of output]
G:\dev\AIPY\Anaconda\Lib\site-packages\setuptools\dist.py:483: SetuptoolsDeprecationWarning: Cannot find any files for the given pattern.
!!
********************************************************************************
Pattern 'LICENSE.txt' did not match any files.
By 2026-Mar-20, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
********************************************************************************
!!
for path in sorted(cls._find_pattern(pattern, enforce_match))
libffcv.cpp
./libffcv/libffcv.cpp(38): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(38): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(39): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(39): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(40): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(40): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(40): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(40): warning C4244: 'argument': conversion from 'int64_t' to 'int', possible loss of data
./libffcv/libffcv.cpp(49): warning C4244: 'argument': conversion from 'int64_t' to 'long', possible loss of data
./libffcv/libffcv.cpp(98): warning C4244: 'argument': conversion from '__uint64_t' to 'unsigned long', possible loss of data
./libffcv/libffcv.cpp(102): warning C4244: '=': conversion from '__uint64_t' to 'unsigned long', possible loss of data
Creating library build\temp.win-amd64-cpython-311\Release\libffcv\_libffcv.cp311-win_amd64.lib and object build\temp.win-amd64-cpython-311\Release\libffcv\_libffcv.cp311-win_amd64.exp
libffcv.obj : error LNK2001: unresolved external symbol tjTransform
libffcv.obj : error LNK2001: unresolved external symbol tjInitDecompress
libffcv.obj : error LNK2001: unresolved external symbol tjDecompress2
libffcv.obj : error LNK2001: unresolved external symbol tjFree
libffcv.obj : error LNK2001: unresolved external symbol tjInitTransform
build\lib.win-amd64-cpython-311\ffcv\_libffcv.cp311-win_amd64.pyd : fatal error LNK1120: 5 unresolved externals
error: command 'G:\\dev\\MSBuildTools\\VC\\Tools\\MSVC\\14.44.35207\\bin\\HostX64\\x64\\link.exe' failed with exit code 1120
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for ffcv
error: failed-wheel-build-for-install
Failed to build installable wheels for some pyproject.toml based projects
ffcv
</code></pre>
|
<python><windows><pip><setuptools>
|
2025-10-15 22:12:50
| 1
| 1,859
|
PChemGuy
|
79,791,314
| 2,955,583
|
Inheritance of decorated classes
|
<p>I'm using Python decorators to implement common functionality across multiple classes. The problem comes when I want the decorated classes to inherit from each other: <code>super()</code> doesn't seem to delegate to the base class. The following is an MRE:</p>
<pre class="lang-py prettyprint-override"><code>def decorate(cls):
name = cls.__name__
class Obj(cls):
def get_name(self):
return name
def print_names(self):
try:
super().print_names()
except AttributeError:
pass
print(f"Name: {self.get_name()}")
return Obj
@decorate
class A:
pass
@decorate
class B(A):
pass
a = A()
b = B()
a.print_names()
print()
b.print_names()
</code></pre>
<p>This produces the following output:</p>
<pre><code>Name: A
Name: B
Name: B
</code></pre>
<p>Whereas I would expect:</p>
<pre><code>Name: A
Name: A
Name: B
</code></pre>
<p><code>B.__mro__</code> is:</p>
<pre><code>(<class '__main__.decorate.<locals>.Obj'>, <class '__main__.B'>, <class '__main__.decorate.<locals>.Obj'>, <class '__main__.A'>, <class 'object'>)
</code></pre>
<p>So I would expect <code>super().print_names()</code>, when called from <code>b.print_names()</code>, to resolve to <code>B.__mro__[2]</code> and therefore return a name of A.</p>
<p>What's going on here? Is what I'm trying to do possible?</p>
<p><strong>=== EDIT ===</strong></p>
<p>Note also the following two cases. Why is the behaviour different if I assign the name as a class variable and then explicitly reference the element of <code>__mro__</code>?</p>
<p><strong>Example 1</strong></p>
<pre><code>def decorate(cls):
my_name = cls.__name__
class Obj(cls):
name = my_name
def print_names(self):
try:
print(f"Name: {self.__class__.__mro__[2].name}")
except AttributeError:
pass
except IndexError:
pass
print(f"Name: {self.name}")
return Obj
@decorate
class A:
pass
@decorate
class B(A):
pass
a = A()
b = B()
a.print_names()
print()
b.print_names()
</code></pre>
<p>Produces:</p>
<pre><code>Name: A
Name: A
Name: B
</code></pre>
<p><strong>Example 2</strong></p>
<pre><code>def decorate(cls):
my_name = cls.__name__
class Obj(cls):
def __init__(self):
self.name = my_name
def get_name(self):
return self.name
def print_names(self):
try:
super().print_names()
except AttributeError:
pass
except IndexError:
pass
print(f"Name: {self.get_name()}")
return Obj
@decorate
class A:
pass
@decorate
class B(A):
pass
a = A()
b = B()
a.print_names()
print()
b.print_names()
</code></pre>
<p>Produces</p>
<pre><code>Name: A
Name: B
Name: B
</code></pre>
|
<python><inheritance><decorator><python-decorators>
|
2025-10-15 15:00:08
| 3
| 452
|
jezza
|
79,791,237
| 17,441,006
|
Weaviate Langchain gets stuck on document upload
|
<p>Trying to upload documents to a local Weaviate client. However, it gets stuck on <code>add_document</code>.</p>
<p>Here's my simple stripped down function.</p>
<pre><code>from langchain_weaviate import WeaviateVectorStore
from langchain.schema import Document
from langchain_openai import OpenAIEmbeddings
def upload_chunks_to_weaviate(client, documents, collection_name):
embedder = OpenAIEmbeddings()
vectorstore = WeaviateVectorStore(
client=client,
index_name=collection_name,
text_key="text",
embedding=embedder,
)
doc = Document(page_content="hello world from LangChain + Weaviate")
vectorstore.add_documents([doc])
print(f"Uploaded 1 document to '{collection_name}'")
</code></pre>
<p>Here's where I'm calling that function from:</p>
<pre><code>if __name__ == "__main__":
client = weaviate.connect_to_local(
skip_init_checks=True,
additional_config=AdditionalConfig(trust_env=True,
timeout=Timeout(init=30, query=60, insert=120) # Values in seconds
))
# Load code from repo
data = loader.read_file('repo_data/example.dart')
# Chunking strategy: "ast", "lines", or "tokens"
strategy = "ast"
kwargs = {}
if strategy == "lines":
kwargs["n_lines"] = 20
elif strategy == "tokens":
kwargs["max_tokens"] = 200
# Chunk the code
chunks = chunk_code(data, strategy=strategy, **kwargs)
upload.upload_chunks_to_weaviate(
client=client,
documents=chunks,
collection_name="RepoChunk",
)
client.close()
</code></pre>
<p>It get's stuck on <code>add_documents</code>, I've tried various options, but with no luck. I'm not getting an error message or anything, it just keeps running and I have to signal it to close.</p>
<p>I'm using LangChain 0.3.27, langchain-weaviate 0.0.5.</p>
|
<python><langchain><weaviate>
|
2025-10-15 13:52:24
| 1
| 543
|
Coder
|
79,791,112
| 1,230,477
|
How to make Playwright to connect to existing/opened Camoufox browser at certain port
|
<p><strong><a href="https://deepwiki.com/daijro/camoufox" rel="nofollow noreferrer">Camoufox</a></strong></p>
<p>Given: the code <strong>opens Camoufox server and launches a browser at a certain port</strong>:</p>
<pre><code>def automate_browser(server_url, browser_name):
try:
print(f"Connecting to {browser_name}...")
# Connect to the server using Playwright
with sync_playwright() as p:
browser = p.firefox.connect(server_url)
page = browser.new_page()
...
except Exception as e:
print(f"Error in {browser_name} automation: {e}")
server_thread = threading.Thread(target=run_server, args=(port, path))
# Create browser automation thread
browser_url = f"ws://localhost:{port}/{path}"
# browser_url = ws://localhost:25501/browser25501
browser_thread = threading.Thread(
target=automate_browser,
args=(browser_url, name)
)
</code></pre>
<p>My <strong>attempt to connect to the existing/opened one</strong>:</p>
<pre><code>ws_url= "ws://localhost:25501/browser25501"
browser = await p.firefox.connect(ws_url)
print("✅ Connected successfully!")
version = browser.version
print(f"🛠️ Browser version: {version}")
# Get browser contexts
contexts = browser.contexts
print(f"📋 Found {len(contexts)} contexts")
</code></pre>
<p>The current code print "Found 0 contexts" thus <strong>failing to connect to the opened browser</strong>.<br />
How to <strong>make Playwright to connect to existing/opened browser at certain port</strong>, eg. 25501 (Web Socket URL: ws://localhost:25501/browser25501 )</p>
<p><strong>Any way out ?</strong></p>
<p>Could it be a <strong>problem of a [server] thread</strong>, where it's not possible to enter into/ connect to ?</p>
|
<python><browser><server><playwright><camoufox>
|
2025-10-15 11:26:51
| 0
| 6,351
|
Igor Savinkin
|
79,791,079
| 1,725,553
|
twine error with recent setuptools build but previous ones fine "InvalidDistribution: Metadata is missing required fields: Name, Version"
|
<p>I'm trying to upload the python module I maintain to pypi (as I have done every few months for the last twelve years) but when I run twine check I get:</p>
<pre class="lang-bash prettyprint-override"><code>$ twine check dist/pi3d-2.55*
Checking dist/pi3d-2.55-py3-none-any.whl: ERROR InvalidDistribution: Metadata is missing required fields: Name, Version.
Make sure the distribution includes the files where those fields are specified, and is using a supported Metadata-Version: 1.0, 1.1, 1.2, 2.0, 2.1,
2.2.
</code></pre>
<p>the module is <a href="https://github.com/tipam/pi3d/releases" rel="nofollow noreferrer">https://github.com/tipam/pi3d/releases</a> and if you look at the latest <code>v2.55</code> that I'm trying to publish now compared with <code>v2.54</code> published 2025/03/08 there are only tiny changes to a couple of files.</p>
<p>I hadn't upgraded <code>setuptools</code>, <code>twine</code> or <code>pkginfo</code> that I'm aware of (I've reinstalled --upgrade and various specific versions since first getting this error, of course)</p>
<p>The earlier builds in my <code>pi3d/dist</code> don't give the twine check error so I think it must be something to do with setuptools. I've tried all the suggestions I've found googling around the error message but the only thing I can find is that the wheel contains <code>pi3d-2.55-py3-non-any/pi3d-2.55.dist-info/METADATA</code></p>
<pre class="lang-yaml prettyprint-override"><code>Metadata-Version: 2.4
Name: pi3d
Version: 2.55
Summary: pi3d OpenGL 3D graphics library
Author: Tim Skillman, Paddy Gaunt, Tom Ritchford
Maintainer: Paddy Gaunt
License-Expression: MIT
Project-URL: Homepage, http://pi3d.github.com/html/index.html
Keywords: OpenGL,3D,r...
</code></pre>
<p>i.e. the Metadata-Version is 2.4 which isn't in the list given by the ERROR message! (Previous wheels have 2.2 and don't give the twine error)</p>
<p>Any clue what the actual problem is and how to solve it?</p>
<p>Paddy</p>
<p>PS EDIT if I change the content of the <code>.whl</code> file to set <code>Metadata-Version: 2.2</code> twine passes it OK but I get an error from the server that it doesn't contain a <code>WHEEL</code>. There seems to be a file with that name at the location it gives.</p>
<pre class="lang-none prettyprint-override"><code>$ twine upload dist/pi3d-2.55*.whl --verbose
Uploading distributions to https://upload.pypi.org/legacy/
INFO dist/pi3d-2.55-py3-none-any.whl (307.5 KB)
INFO Querying keyring for password
Enter your API token:
INFO username: __token__
INFO password: <hidden>
Uploading pi3d-2.55-py3-none-any.whl
100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 319.9/319.9 kB • 00:02 • 193.1 kB/s
INFO Response from https://upload.pypi.org/legacy/:
400 Invalid distribution file. WHEEL not found at pi3d-2.55.dist-info/WHEEL
INFO <html>
<head>
<title>400 Invalid distribution file. WHEEL not found at pi3d-2.55.dist-info/WHEEL</title>
</head>
<body>
<h1>400 Invalid distribution file. WHEEL not found at pi3d-2.55.dist-info/WHEEL</h1>
The server could not comply with the request since it is either malformed or otherwise incorrect.<br/><br/>
Invalid distribution file. WHEEL not found at pi3d-2.55.dist-info/WHEEL
</body>
</html>
ERROR HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/
Invalid distribution file. WHEEL not found at pi3d-2.55.dist-info/WHEEL
</code></pre>
|
<python><setuptools><pypi>
|
2025-10-15 10:52:20
| 1
| 2,277
|
paddyg
|
79,791,017
| 17,902,018
|
Cannot get token logprobs while using langchain structured output
|
<p>I am using langchain to call an LLM and I want to get the logprobs for each token.
I want to get them after doing this:</p>
<pre><code>from langchain_openai import ChatOpenAI
from pydantic import BaseModel
class ResponseFormat(BaseModel):
question_index: int
answer: str
short_answer: str
llm = ChatOpenAI(
openai_api_base="...",
openai_api_key="...",
model="...")
structured_llm = llm.with_structured_output(ResponseFormat, method="json_schema")
msg = llm.invoke(("human", "how are you today?"))
# ... there is no response_metadata
</code></pre>
<p>I tried adding <code>.bind(logprobs=True)</code> on both <code>llm</code> and <code>structured_llm</code>, but the result is the same.</p>
<p>The issue is known and described <a href="https://github.com/langchain-ai/langchain/discussions/29665" rel="nofollow noreferrer">here</a> and <a href="https://github.com/langchain-ai/langchain/issues/29668" rel="nofollow noreferrer">here</a> but still the suggestion of adding <code>include_raw</code> doesn't work:</p>
<pre><code>structured_llm = llm.with_structured_output(ResponseFormat, method="json_schema", include_raw=True)
msg = structured_llm.invoke(("human", "how are you today?"))
# ... msg["raw"].response_metadata["logprobs"] is None
</code></pre>
<p>The only reason I could think of is that I am contacting a LiteLLM proxy that in turn contacts azure/openai models and returns me the response, but I am surprised this case isn't discussed anywhere.</p>
<p>Details:</p>
<ul>
<li>python version == 3.10.5</li>
<li>langchain-openai version == 0.2.1</li>
<li>langchain version == 0.3.2</li>
<li>pydantic version == 2.11.7</li>
</ul>
|
<python><openai-api><langchain><large-language-model><litellm>
|
2025-10-15 09:53:09
| 1
| 2,128
|
rikyeah
|
79,790,995
| 14,566,295
|
Failing to fill empty date values with numpy nan
|
<p>I have below code</p>
<pre><code>import pandas as pd
import numpy as np
dat = pd.DataFrame({'A' : [1,2,3,4,5], 'B' : ['2002-01-01', '2003-01-01', '2004-01-01', '2004-01-01', '2005-01-01']})
dat['B'] = pd.to_datetime(dat['B'])
dat['A'] = np.where(
dat['A'].isin([1,2]),
(dat['B'] + pd.DateOffset(months = 12)),
np.where(
dat['A'].isin([3,4]),
(dat['B'] + pd.DateOffset(months = 10)),
np.nan))
</code></pre>
<p>Basically, I would like to fill the empty date fields with <code>np.nan</code>. However with above code, I am getting below error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 4, in <module>
numpy.exceptions.DTypePromotionError: The DType <class 'numpy.dtypes.DateTime64DType'> could not be promoted by <class 'numpy.dtypes.Float64DType'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is `object`. The full list of DTypes is: (<class 'numpy.dtypes.DateTime64DType'>, <class 'numpy.dtypes.Float64DType'>)
</code></pre>
<p>Could you please help to resolve this error?</p>
|
<python><pandas>
|
2025-10-15 09:28:03
| 2
| 1,679
|
Brian Smith
|
79,790,744
| 3,735,871
|
Unable to use Airflow variable with function in Jinja template
|
<p>I'm trying to pass Airflow <code>logical_date</code> to dbt model so that I can use it in the model (sql). I'm using Airflow 2.11.0. I'm doing below, but the dag couldn't get constructed, with an error says:</p>
<blockquote>
<p>'logical_date' is undefined.</p>
</blockquote>
<p>However it works if I remove the <code>in_timezone</code> conversion.
How should I do this? It works in earlier version of Airflow. Thanks.</p>
<pre><code>dbt_task = DbtTaskGroup (
select = ["mymodel"]
operator_args{
"vars": {
"logical_date_to_use_in_model" : "{{logical_date.in_timezone('America/Vancouver')}}"
},
},
)
</code></pre>
|
<python><templates><airflow><jinja2><dbt>
|
2025-10-15 02:14:32
| 1
| 367
|
user3735871
|
79,790,732
| 395,857
|
How to fix “Expected all tensors to be on the same device” when running inference with Qwen3-VL-4B-Instruct?
|
<p>I am trying to run the <a href="https://huggingface.co/Qwen/Qwen3-VL-4B-Instruct" rel="nofollow noreferrer">code example</a> for run some inference on the model <a href="https://huggingface.co/Qwen/Qwen3-VL-4B-Instruct" rel="nofollow noreferrer">Qwen/Qwen3-VL-4B-Instruct</a> model:</p>
<pre><code>from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
# default: Load the model on the available device(s)
model = Qwen3VLForConditionalGeneration.from_pretrained(
"Qwen/Qwen3-VL-4B-Instruct", dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = Qwen3VLForConditionalGeneration.from_pretrained(
# "Qwen/Qwen3-VL-4B-Instruct",
# dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
processor = AutoProcessor.from_pretrained("Qwen/Qwen3-VL-4B-Instruct")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
# Preparation for inference
inputs = processor.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_dict=True,
return_tensors="pt"
)
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
</code></pre>
<p>But I get the error message:</p>
<pre><code>RuntimeError: Expected all tensors to be on the same device,
but got index is on cpu, different from other tensors on cuda:0
(when checking argument in method wrapper_CUDA__index_select)
</code></pre>
<p>How can I fix the issue?</p>
|
<python><huggingface-transformers>
|
2025-10-15 01:28:45
| 2
| 84,585
|
Franck Dernoncourt
|
79,790,594
| 1,318,266
|
How to resolve python's "Import `ofxparse` could not be resolved"
|
<p>I'm trying to build a small script in Python 3.14 but am hung up at "Import <code>ofxparse</code> could not be resolved". The script is built in Visual Studio Code on Windows 11.</p>
<p>So far:</p>
<ul>
<li><code>pip install ofxparse</code> is successful</li>
<li>First line of script is <code>from ofxparse import OfxParser</code></li>
<li>Python is installed in <code>C:..\AppData\Local\Programs\Python\Python314</code></li>
<li>Environment variables include <code>...\Python\Python314</code>, <code>...\Python\Python314\Scripts\</code>, <code>...\Python\Launcher</code>, <code>...Python\Python314\Lib\site-packages</code></li>
<li><code>..\ofxparse</code> appears in <code>...Python\Python314\Lib\site-packages</code></li>
</ul>
<p>What am I missing?</p>
|
<python><windows><powershell>
|
2025-10-14 20:18:06
| 1
| 4,728
|
geoB
|
79,790,533
| 10,006,534
|
How to write a pandas-compatible, non-elementary expression in narwhals
|
<p>I'm working with the <a href="https://narwhals-dev.github.io/narwhals/" rel="nofollow noreferrer">narwhals</a> package and I'm trying to write an expression that is:</p>
<ol>
<li>applied over groups using .over()</li>
<li>Non-elementary/chained (longer than a single operation)</li>
<li>Works when the native df is pandas</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import narwhals as nw
import polars as pl
df = pl.DataFrame({
"x": [1, 2, 3, 4],
"group": [1, 1, 2, 2],
})
df = nw.from_native(df)
</code></pre>
<p>Elementary expressions work fine for both pandas and polars native dfs:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
nw.col('x').cum_sum().over('group')
)
</code></pre>
<p>Non-elementary (i.e. chained) expressions work fine with polars native dfs, but this throws an error when the df is pandas.</p>
<pre class="lang-py prettyprint-override"><code># works fine here- the native df is polars
df.with_columns(
nw.col('x').cum_sum().shift(1).over('group')
)
# yields an error- the native df is pandas
pd_df = nw.from_native(df.to_pandas())
pd_df.with_columns(
nw.col('x').cum_sum().shift(1).over('group')
)
# Error is NotImplementedError: Only elementary expressions are supported for `.over` in pandas-like backends.
</code></pre>
<p>I tried to find a workaround with a custom function and <code>.map_batches()</code>, but I get an error here too.</p>
<pre class="lang-py prettyprint-override"><code>def some_non_elementary_expression(x):
return x.cum_sum().shift(1)
# Works using the polars API
df.to_native().with_columns(
pl.col('x').map_batches(
some_non_elementary_expression,
).over('group')
)
# with narwhals - yields an error.
df.with_columns(
nw.col('x').map_batches(
some_non_elementary_expression,
).over('group')
)
# InvalidOperationError: Cannot use `over` on expressions which are elementwise (e.g. `abs`) or which change length (e.g. `drop_nulls`).
</code></pre>
<p>Any help would be appreciated. Thank you!</p>
|
<python><pandas><dataframe><python-polars><python-narwhals>
|
2025-10-14 19:07:36
| 1
| 581
|
Slash
|
79,790,517
| 13,132,728
|
Create an incremental suffix for values in a pandas column that have duplicate values in another column
|
<h1>Setup</h1>
<p>I have a dataframe, <code>df</code></p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{
'Name':['foo','foo','foo','bar','bar','bar','baz','baz','baz'],
'Color':['red','blue','red','green','green','blue','yellow','orange','red']
}
)
</code></pre>
<pre><code> Name Color
0 foo red
1 foo blue
2 foo red
3 bar green
4 bar green
5 bar blue
6 baz yellow
7 baz orange
8 baz red
</code></pre>
<h1>Desired Output</h1>
<p>I would like to add an enumerating suffix for each <code>Name</code> that has a duplicate <code>Color</code></p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame(
{
'Name':['foo_1','foo','foo_2','bar_1','bar_2','bar','baz','baz','baz'],
'Color':['red','blue','red','green','green','blue','yellow','orange','red']
}
)
</code></pre>
<pre><code> Name Color
0 foo_1 red
1 foo blue
2 foo_2 red
3 bar_1 green
4 bar_2 green
5 bar blue
6 baz yellow
7 baz orange
8 baz red
</code></pre>
<p>As you can see, there is a suffix with an incremental count for each time a <code>Name</code> has a repeat <code>Color</code>. If a <code>Name</code> has a <code>Color</code> only one time, there is no suffix added.</p>
<h1>What I have tried</h1>
<p>I was thinking of taking a <code>.groupby()</code> with an aggregate of <code>.value_counts()</code> to get a total count, and somehow use that to assign the suffixes if necessary. Here is an idea I had that seems very inefficient:</p>
<pre><code># group by name aggregate color value counts
gb = df.groupby(['Name']).agg(Color_count=('Color','value_counts')).reset_index()
# keep only counts that are >1 ie need a suffix
gb = gb.loc[gb.Color_count > 1].copy()
# merge back to original df
df.merge(gb, on=['Name','Color'],how='left').fillna(0)
# from here, somehow start an incremental suffix for nonzero values of `Color_count`...
</code></pre>
|
<python><pandas><dataframe>
|
2025-10-14 18:49:22
| 4
| 1,645
|
bismo
|
79,790,512
| 5,044,950
|
Is the exclude-newer setting in uv sufficient for determinism and reproducibility without a lockfile?
|
<p>The Python package management tool uv has an <a href="https://docs.astral.sh/uv/reference/settings/#exclude-newer" rel="nofollow noreferrer"><code>exclude-newer</code></a> setting, documented as follows:</p>
<blockquote>
<p>Limit candidate packages to those that were uploaded prior to a given point in time.</p>
<p>Accepts a superset of <a href="https://www.rfc-editor.org/rfc/rfc3339.html" rel="nofollow noreferrer">RFC 3339</a> (e.g., <code>2006-12-02T02:07:43Z</code>). A full timestamp is required to ensure that the resolver will behave consistently across timezones.</p>
</blockquote>
<p>Let's say a project sets <code>exclude-newer</code> to some time in the past but does not have a <code>uv.lock</code> file, e.g. via <code>.gitignore</code>. Are there any cases in which the set of resolved dependency versions can change over time with <code>exclude-newer</code> where they would not have changed if a <code>uv.lock</code> file were present? More specifically, are there any such cases when no package indices besides PyPI are used?</p>
|
<python><pypi><uv>
|
2025-10-14 18:37:09
| 0
| 13,402
|
Sam Estep
|
79,790,457
| 9,801,811
|
Mixture Model of genextreme in Scipy
|
<p>I'm trying to develop a mixture of <code>genextreme</code> distributions in Scipy using the following code.</p>
<pre><code>from scipy.stats import genextreme
from scipy.stats import Mixture
eva_1 = {'c': -0.48, 'loc': 38.82, 'scale': 17.18}
eva_2 = {'c': -0.57, 'loc': 26.44, 'scale': 4.69}
gev_1 = genextreme(**eva_1)
gev_2 = genextreme(**eva_2)
gev_mixture = Mixture(
components=[gev_1, gev_2],
weights=[0.3, 0.7]
)
</code></pre>
<p>This resulted in this error: <em>ValueError: Each element of <code>components</code> must be an instance of <code>ContinuousDistribution</code>.</em></p>
<p><code>type(gev_1)</code> yields <code>scipy.stats._distn_infrastructure.rv_continuous_frozen</code> Are you aware of any approach to construct a mixture of <code>genextreme</code> distributions in Scipy?</p>
|
<python><scipy><mixture-model>
|
2025-10-14 17:30:10
| 1
| 447
|
PPR
|
79,790,431
| 8,522,013
|
How do I print to application output window when debugging python in Qt Creator IDE?
|
<p>I'm debugging a python project (<code>pyproject</code>) in Qt Creator using built-in debugger.</p>
<p>Python <code>print()</code> outputs only to <code>Debugger Log</code> window, where it's mixed with a lot of actual debugger output making it very hard to find the output.</p>
<p>Is that behavior expected or there's an issue with my environment?</p>
<p>Is it possible to somehow output text to <code>Application Output</code> or at least a terminal window?</p>
<h1>Additional details</h1>
<ul>
<li>I've tried running QtCreator 4.11.0(apt) and 13.0.2(snap) on Ubuntu 20.04.6, problem exists on both</li>
<li>If I run the project without debugging, python <code>print()</code> outputs to <code>Application Output</code> window correctly</li>
<li>The built-in python debugger runs <code>pdb</code> via Qt's <code>pdbbridge.py</code> and otherwise works ok: breaks on breakpoints, shows variables</li>
<li><code>sys.stderr.write("test123")</code> also outputs only to <code>Debugger log</code> complaining <code>Unexpected pdb stderr: test123</code></li>
<li><code>Application Output</code> window only shows <code>Debugging starts</code> and <code>Debugging has finished</code></li>
<li>If <code>Run in terminal</code> is enabled - the terminal window is blank</li>
<li>C++ and QML debuggers do output to <code>Application Output</code> without issues</li>
<li><a href="https://stackoverflow.com/questions/7152762/how-to-redirect-print-output-to-a-file">Redirecting print output to a file</a> breaks debugger until STDOUT redirection is disabled, file is also written only upon <code>close()</code> despite <code>PYTHONUNBUFFERED=1</code></li>
</ul>
<p>Thanks in advance, any suggestions or workarounds are welcome.</p>
|
<python><qt-creator><pdb><qtpy>
|
2025-10-14 16:56:42
| 0
| 928
|
Jack White
|
79,790,417
| 8,245,400
|
transformation logic for real-time inference service
|
<p>I have developed an XGBoost model, the data transformations where done using Pandas for training the model. For real-time inference the data comes from the HTTP request single object/record that should be transformed using the same logic that was used on Pandas dataframe for training and then model is invoked to score the record. For this when I converted the HTTP request object into Pandas Dataframe it was taking a lot of time. To solve this problem, I re-wrote the transformation logic using Numpy.</p>
<p>Is there a better solution to handle this problem? Will using a feature store avoid re-writing the transformation logic using Numpy?</p>
|
<python><pandas><numpy><machine-learning>
|
2025-10-14 16:42:14
| 0
| 417
|
Raj
|
79,790,365
| 4,503,546
|
Index in to two specific dates on Pandas dataframe
|
<p>I have a pandas dataframe where the index is datetime. I learned that I can index in to a specific date using this code:</p>
<pre><code>selected_date_df = df.loc['yyyy-mm-dd']
</code></pre>
<p>I can also find data between two dates with this code:</p>
<pre><code>date_range_df = df.loc['yyyy-mm-dd':'yyyy-mm-dd']
</code></pre>
<p>But I have been unable to figure out how to get exactly two dates that are not sequential. For example, say I want to look at 2025-10-13 and 2025-09-15, is it possible to access two specific dates and remove all other parts of the dataframe?</p>
<p>If not, potentially I could specify the start date and end date when grabbing data and then index in to just the first and last rows of the dataframe achieving the same end goal if no code exists to access two specific non-sequential dates?</p>
<p>My end goal is to calculate performance between two dates which can change and I'd prefer to do it in Python versus downloading a csv to Excel and using Vlookups.</p>
|
<python><pandas>
|
2025-10-14 15:47:14
| 1
| 407
|
GC123
|
79,790,303
| 5,552,507
|
pip install private package from Gitlab private pypi, in conda install environment, inside docker
|
<p>My internal packages are stored in a Gitlab private pypi simple.</p>
<p>I use conda env with <code>environment.yml</code> definition, in which some are install with pip:</p>
<pre class="lang-yaml prettyprint-override"><code># environment.yml
name: test_project
channels:
- conda-forge
- defaults
dependencies:
- python==3.12
- pip
- colorama
- pip:
- my_package_on_gitlab_private_pypi
</code></pre>
<p>To deploy on Gitlab, I use a Docker environment. I would like it to be able to install from my private pypi simple. For now <code>DockerFile.project_env</code> is as such:</p>
<pre class="lang-none prettyprint-override"><code>FROM continuumio/miniconda3
# Copy env from context to WORKDIR
COPY environment.yml .
# Build conda environment and rename
# Conda environment conda is set to `project_env`
RUN conda env update -n project_env -f environment.yml && \
conda clean -afy
# Instal useful tools for CI/CD in conda env
RUN conda run -n project_env pip install --upgrade pip && \
conda run -n project_env pip install build twine pytest
# Implicit activationof conda env in next jobs
SHELL ["conda", "run", "-n", "project_env", "/bin/bash", "-c"]
# Always use conda run in next scripts
# to guaranty that all commands executed in GitLab CI with this
# image (ex : pytest, python, twine, etc.) will be in conda env
# `project_env` without the need to explicitly activate it.
ENTRYPOINT ["conda", "run", "--no-capture-output", "-n", "project_env"]
CMD ["bash"]
</code></pre>
<p>The problem is that it does not provide pip with the information about private pypi location.</p>
<p>On my (Windows) laptop I added a <code>pip.ini</code> file in folder <code>%APPDATA%\pip</code>.</p>
<pre><code>[global]
extra-index-url = https://<USERNAME>:<TOKEN>@gitlab.com/api/v4/groups/<GROUP_ID>/-/packages/pypi/simple
</code></pre>
<p>and it works well.</p>
<p><strong>How to tell the same to pip inside conda installer inside docker?</strong></p>
<p>I did not find a way to pass <code>--url-index</code> directly to pip, as I don't want to hardcode username and token in <code>environment.yml</code>.</p>
<p>Maybe that can be done at Gitlab CI stage as a script? Here is an extract of my <code>.gitlab-ci.yml</code> file centered on dockerizing:</p>
<pre class="lang-yaml prettyprint-override"><code>
stages:
- dockerize
dockerize-env:
stage: dockerize
image: docker:latest
services:
- docker:dind
before_script: []
script:
- echo "$CI_JOB_TOKEN" | docker login -u gitlab-ci-token --password-stdin "$CI_REGISTRY"
- |
ENV_HASH=$(sha256sum environment.yml | cut -d' ' -f1 | cut -c1-12)
export IMAGE_TAG="$CI_REGISTRY_IMAGE:env-$ENV_HASH"
echo "🔎 Hash de environment.yml : $ENV_HASH"
echo "🔧 Image Docker attendue : $IMAGE_TAG"
if docker pull "$IMAGE_TAG" 2>/dev/null; then
echo "✅ Image déjà construite : $IMAGE_TAG"
else
echo "🚧 Construction d'une nouvelle image conda avec l'env..."
mkdir -p DockerContext
cp environment.yml DockerContext/environment.yml
cp DockerFile.project_env DockerContext/DockerFile.project_env
docker build -t "$IMAGE_TAG" -f DockerContext/DockerFile.project_env DockerContext
docker push "$IMAGE_TAG"
echo "🚀 Image publiée : $IMAGE_TAG"
fi
- echo "DOCKER_IMAGE_NAME_TAG=$CI_REGISTRY_IMAGE:env-$ENV_HASH" >> env_vars
artifacts:
reports:
dotenv: env_vars
</code></pre>
|
<python><docker><pip><conda>
|
2025-10-14 14:41:56
| 0
| 307
|
PiWi
|
79,790,296
| 11,530,571
|
Dependency Hell in Google Colab
|
<p>I was trying to train a NN in Google Colab with Pytorch and then convert it to TFlite. It seems that everything works only with Python 3.11. I was able to find a "switch" that returns Colab to Python 3.11, so it seemed good... until I tried to install anything. I cannot install anything without triggering dozens of incompatibilities between libraries. Does there even exist any stable configuration that supports Pytorch and TFlite simultaneously, allowing conversion between them? Here's my best shot after two weeks of hell (but still not working):</p>
<pre><code>!pip install --force-reinstall -q "jedi>=0.16" "requests==2.32.3" "rich==13.7.1" "fsspec==2025.3.2" "pydantic==2.11.0" "packaging==24.2" "markdown-it-py==3.0.0" "tensorflow==2.18.0" "spacy==3.8.2" "thinc>=8.3.4,<8.4.0" "numpy<2.0,>=1.24.0" "torch==2.3.1" "torchvision==0.18.1" "torchmetrics" "torchinfo" "torchview" "onnx==1.15.0" "onnxruntime==1.16.3" "ml-dtypes<0.5.0" "opencv-python-headless<4.10" "torchaudio==2.3.1" "onnx-tf==1.10.0"
</code></pre>
<p>P.S. Switch to old Python: Runtime > Change Runtime Type > Runtime Version set to 2025.07</p>
<p>P.P.S I convert through onnx.</p>
<p><strong>Idea 1 (thanks to Frede)</strong>
I tried to use poetry</p>
<pre><code>!pip install -q poetry
!poetry config virtualenvs.create false
!poetry env use 3.11.13
!poetry init --no-interaction
!poetry add torch torchmetrics torchinfo torchview torchvision\
onnx onnxruntime onnx-tf\
tensorflow tensorflow-probability tflite-runtime keras --python 3.11
</code></pre>
<p>Packages were installed, but <code>import onnx_tf</code> fails due to versions incompatibility. Additionally I tried to pin versions for some of them (1.14.0 for onnx and 2.15 for tensorflow) but still no luck.</p>
|
<python><machine-learning><google-colaboratory>
|
2025-10-14 14:37:16
| 0
| 456
|
guest
|
79,790,263
| 12,945,785
|
How to read a Microsoft SQL Data with Polars
|
<p>I would like to read a database with Polars and benefit from his speed vs Pandas.
Now I use this function to read db with pandas. So my question is simple how to convert it with polars and get something rapid (my database has millions of rows).</p>
<pre><code>def lecture_bdd(DATABASE, table=None, colonnes=None, condition=None, query=None):
SERVER = "XXXXXX.windows.net"
USERNAME = "YYYYYYYYYY"
PASSWORD = "ZZZZZZZZZZ"
DRIVER = "ODBC+Driver+17+for+SQL+Server"
connection_string = f"mssql+pyodbc://{USERNAME}:{PASSWORD}@{SERVER}:1433/{DATABASE}?driver={DRIVER}"
engine = create_engine(connection_string)
# Utiliser query complet si fourni
if query:
final_query = query
else:
if not table:
raise ValueError("Le nom de la table est requis si aucun 'query' n'est fourni.")
final_query = f"SELECT {', '.join(colonnes) if colonnes else '*'} FROM {table}"
if condition:
final_query += f" WHERE {condition}"
# Chargement optimisé par accumulation de chunks
chunks = []
for chunk in pd.read_sql(final_query, engine, dtype_backend="pyarrow", chunksize=100_000):
chunks.append(chunk.dropna(how='all', axis=1))
# chunks = [chunk for chunk in chunks if not chunk.empty]
df_final = pd.concat(chunks, ignore_index=True) if chunks else pd.DataFrame()
return df_final
</code></pre>
|
<python><sql-server><dataframe><python-polars>
|
2025-10-14 14:12:02
| 1
| 315
|
Jacques Tebeka
|
79,790,153
| 162,684
|
How to change names of pandas MultiIndex using Styler
|
<p>Let's assume we have the following:</p>
<pre class="lang-py prettyprint-override"><code>midx = pd.MultiIndex.from_product(
[[0, 1], [0, 1], [0, 1]],
names=['L1', 'L2', 'L3'])
df = pd.DataFrame({"col": list(range(8))}, index=midx)
</code></pre>
<p>Now, for visualization purposes, I want the names of the <code>MultiIndex</code> to be <code>['level 1', 'level 2', 'level 3']</code>.</p>
<p>I do not want to change the underlying <code>MultiIndex</code>, because this is only for visualization purposes.</p>
<p>I thought <a href="https://pandas.pydata.org/pandas-docs/version/2.1/reference/api/pandas.io.formats.style.Styler.relabel_index.html#pandas.io.formats.style.Styler.relabel_index" rel="nofollow noreferrer"><code>Styler.relabel_index()</code></a> could help me, and I tried</p>
<pre class="lang-py prettyprint-override"><code>s = df.style
s.relabel_index(labels=['level 1', 'level 2', 'level 3'], axis=0)
</code></pre>
<p>but unfortunately it does not work, and I get this error:</p>
<p><code>ValueError: ``labels`` must be of length equal to the number of visible labels along ``axis`` (8).</code></p>
<p>Obviously this is trying to rename the 8 labels of the index ... but how do I rename the names using the <code>Styler</code>?</p>
<p>(I am using pandas 2.1.4)</p>
|
<python><pandas>
|
2025-10-14 12:17:13
| 2
| 13,583
|
MarcoS
|
79,789,768
| 14,566,295
|
Increase the date by number of months in pandas
|
<p>I have below <code>pandas</code> data frame</p>
<pre><code>import pandas as pd
import numpy as np
dat = pd.DataFrame({'A' : [1,2,3,4,5], 'B' : ['2002-01-01', '2003-01-01', '2004-01-01', '2004-01-01', '2005-01-01']})
dat['A'] = dat['A'].astype('Int64')
</code></pre>
<p>Now I want to create another date column from column <code>B</code> by adding number of months from column <code>A</code>. Below is my code</p>
<pre><code>pd.to_datetime(dat['B'], errors = 'coerce', dayfirst = True) + pd.DateOffset(months = dat['A'])
</code></pre>
<p>However with that, I get below error,</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "offsets.pyx", line 1382, in pandas._libs.tslibs.offsets.RelativeDeltaOffset.__init__
File "offsets.pyx", line 328, in pandas._libs.tslibs.offsets._determine_offset
File "/Users/abc/Python_VENV/lib/python3.12/site-packages/dateutil/relativedelta.py", line 172, in __init__
if any(x is not None and x != int(x) for x in (years, months)):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/abc/Python_VENV/lib/python3.12/site-packages/dateutil/relativedelta.py", line 172, in <genexpr>
if any(x is not None and x != int(x) for x in (years, months)):
^^^^^^
File "/Users/abc/Python_VENV/lib/python3.12/site-packages/pandas/core/series.py", line 248, in wrapper
raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'int'>
</code></pre>
<p>Could you please help to resolve this?</p>
|
<python><pandas><dataframe>
|
2025-10-14 02:44:22
| 1
| 1,679
|
Brian Smith
|
79,789,608
| 2,751,573
|
How to exclude empty array with to_json
|
<p>Is there any way to exclude null arrays/structs when using to_json? I'm getting something like this coming back: <code>{"arrayColumn":[{}]}</code></p>
<p>Here's a really crude example:</p>
<pre><code> from pyspark.sql.functions import to_json, struct,array,lit
somedata = """
col1
foo
bar
baz
"""
lines = somedata.strip().split('\n')
header = lines[0].split(',')
rows = [line.split(',') for line in lines[1:]]
df = spark.createDataFrame(rows, header)
df2 = df.withColumn("type",struct(array(struct(lit(None).alias("a"),lit(None).alias("b"))).alias("arrayColumn")))
df2.withColumn("msg",struct("col1","type")).select(to_json("msg")).show(truncate = False)
</code></pre>
<p>I am looking to get this back:
<code>{"col1":"foo"}</code>
But instead I'm getting back:
<code>{"col1":"foo","type":{"arrayColumn":[{}]}}</code></p>
|
<python><dataframe><pyspark>
|
2025-10-13 20:09:19
| 1
| 8,893
|
Andrew
|
79,789,580
| 17,472,988
|
Python 'pip install' fails to detect activated MSVC Build Tools
|
<p>When trying to install a package with pip on Windows 10 (Python=3.11.13, pip=25.2, setuptools=80.9.0) via</p>
<pre class="lang-none prettyprint-override"><code>pip install --no-binary :all: pycryptodome
</code></pre>
<p>I get apparently infamous error with not particularly insightful message:</p>
<pre><code> Testing support for clang
Traceback (most recent call last):
File "G:\dev\AIPY\Anaconda\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 389, in <module>
main()
File "G:\dev\AIPY\Anaconda\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 373, in main
json_out["return_val"] = hook(**hook_input["kwargs"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\dev\AIPY\Anaconda\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 143, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pcuser\AppData\Local\Temp\pip-build-env-6vq0slrk\overlay\Lib\site-packages\setuptools\build_meta.py", line 331, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pcuser\AppData\Local\Temp\pip-build-env-6vq0slrk\overlay\Lib\site-packages\setuptools\build_meta.py", line 301, in _get_build_requires
self.run_setup()
File "C:\Users\pcuser\AppData\Local\Temp\pip-build-env-6vq0slrk\overlay\Lib\site-packages\setuptools\build_meta.py", line 317, in run_setup
exec(code, locals())
File "<string>", line 497, in <module>
File "C:\Users\pcuser\AppData\Local\Temp\pip-install-jgu9gndw\pycryptodome_de2d839aee6b4295aed8e8f887f27c7f\compiler_opt.py", line 333, in set_compiler_options
clang = compiler_is_clang()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\pcuser\AppData\Local\Temp\pip-install-jgu9gndw\pycryptodome_de2d839aee6b4295aed8e8f887f27c7f\compiler_opt.py", line 257, in compiler_is_clang
return test_compilation(source, msg="clang")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pcuser\AppData\Local\Temp\pip-install-jgu9gndw\pycryptodome_de2d839aee6b4295aed8e8f887f27c7f\compiler_opt.py", line 82, in test_compilation
objects = compiler.compile([fname], extra_postargs=extra_cc_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pcuser\AppData\Local\Temp\pip-build-env-6vq0slrk\overlay\Lib\site-packages\setuptools\_distutils\compilers\C\msvc.py", line 384, in compile
self.initialize()
File "C:\Users\pcuser\AppData\Local\Temp\pip-build-env-6vq0slrk\overlay\Lib\site-packages\setuptools\_distutils\compilers\C\msvc.py", line 294, in initialize
vc_env = _get_vc_env(plat_spec)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pcuser\AppData\Local\Temp\pip-build-env-6vq0slrk\overlay\Lib\site-packages\setuptools\_distutils\compilers\C\msvc.py", line 155, in _get_vc_env
raise DistutilsPlatformError(
distutils.errors.DistutilsPlatformError: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
</code></pre>
<p>Given that I have recent MSVC Build Tools installed (same shell):</p>
<pre><code>G:\dev\AIPY\Anaconda>cl
Microsoft (R) C/C++ Optimizing Compiler Version 19.44.35217 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
usage: cl [ option... ] filename... [ /link linkoption... ]
</code></pre>
<p>the error message does not help at all. No helpful information in Google, similar SO questions, or from Gemini. So what is wrong and how do I fix it?</p>
|
<python><windows><visual-c++><pip><setuptools>
|
2025-10-13 19:26:23
| 1
| 1,859
|
PChemGuy
|
79,789,573
| 5,972,531
|
List value options for widget configure keys
|
<p>How can I get the <em>value</em> options for config attributes? Especially those that take set values, i.e. anchor, relief, etc.</p>
<p>Call the <code>configure</code> method on some widget, say a label:</p>
<pre><code>label = tk.Label(root, text="Hello World!")
label.configure()
</code></pre>
<p>We get the all attribute option keys, something like:</p>
<pre><code>{
'activebackground': ('activebackground', 'activeBackground', 'Foreground', <border object: 'systemWindowBackgroundColor'>, 'systemWindowBackgroundColor'),
'activeforeground': ('activeforeground', 'activeForeground', 'Background', <color object: 'systemPressedButtonTextColor'>, 'systemPressedButtonTextColor'),
'anchor': ('anchor', 'anchor', 'Anchor', <index object: 'center'>, 'center'),
'background': ('background', 'background', 'Background', <border object: 'systemWindowBackgroundColor'>, 'systemWindowBackgroundColor'),
'bd': ('bd', '-borderwidth'),
'bg': ('bg', '-background'),
'bitmap': ('bitmap', 'bitmap', 'Bitmap', '', ''),
'borderwidth': ('borderwidth', 'borderWidth', 'BorderWidth', 2, 2),
'compound': ('compound', 'compound', 'Compound', <index object: 'none'>, 'none'),
'cursor': ('cursor', 'cursor', 'Cursor', '', ''),
'disabledforeground': ('disabledforeground', 'disabledForeground', 'DisabledForeground', <color object: '#a3a3a3'>, '#a3a3a3'),
'fg': ('fg', '-foreground'),
'font': ('font', 'font', 'Font', <font object: 'TkDefaultFont'>, 'TkDefaultFont'),
'foreground': ('foreground', 'foreground', 'Foreground', <color object: 'systemTextColor'>, 'systemTextColor'),
'height': ('height', 'height', 'Height', 0, 0),
'highlightbackground': ('highlightbackground', 'highlightBackground', 'HighlightBackground', <border object: 'systemWindowBackgroundColor'>, 'systemWindowBackgroundColor'),
'highlightcolor': ('highlightcolor', 'highlightColor', 'HighlightColor', <color object: 'systemTextColor'>, 'systemTextColor'),
'highlightthickness': ('highlightthickness', 'highlightThickness', 'HighlightThickness', 0, 0),
'image': ('image', 'image', 'Image', '', ''),
'justify': ('justify', 'justify', 'Justify', <index object: 'center'>, 'center'),
'padx': ('padx', 'padX', 'Pad', 1, 1),
'pady': ('pady', 'padY', 'Pad', 1, 1),
'relief': ('relief', 'relief', 'Relief', <index object: 'flat'>, 'flat'),
'state': ('state', 'state', 'State', <index object: 'normal'>, 'normal'),
'takefocus': ('takefocus', 'takeFocus', 'TakeFocus', '0', '0'),
'text': ('text', 'text', 'Text', '', 'Hello World!'),
'textvariable': ('textvariable', 'textVariable', 'Variable', '', ''),
'underline': ('underline', 'underline', 'Underline', -1, -1),
'width': ('width', 'width', 'Width', 0, 0),
'wraplength': ('wraplength', 'wrapLength', 'WrapLength', 0, 0)
}
</code></pre>
<p>These look like key/value pairs, but the "values" here are not the allowed options.</p>
<p>It's those allowed options that I want. <em>Many</em> are not obvious.</p>
<p>Say <code>anchor</code> for example, I think it will accept these values:
<code>NW, N, NE, W, CENTER, E, SW, S, SE</code>, but say I really don't know.</p>
<p>How can this be found? For any widget.</p>
<p>I'm trying to create a dropdown list giving these options for a selected widget.</p>
|
<python><tkinter>
|
2025-10-13 19:06:57
| 1
| 970
|
Mote Zart
|
79,789,445
| 11,485,896
|
Jupyter/VSCode does not recognize pip from .venv despite being properly installed
|
<p>EDIT: the issue is resolved (kinda). Check GitHub <a href="https://github.com/ipython/ipykernel/issues/1441" rel="nofollow noreferrer">discussion</a>. It turns out that there are possibly two problematic factors. First (my case), the newest version of Jupyter extension. When I switched back to 2025.8.0 I could finally get back to work... Second factor (case of zoczkos) is <code>ipykernel</code> version. Today on Sep 13th a new version 7.0.0 was released. It can cause problems. 6.30.1 works fine.</p>
<p><strong>EDIT 2: with new updates of both Jupyter VSCode extension to 2025.9.1 and <code>ipykernel</code> to 7.0.1, I can now confirm that all the problems below are fully solved on VSCode 1.105.0.</strong></p>
<hr />
<p>I encountered this strange and the most frustrating issue as below. I am aware that I can receive thumbs down but at the same time I'm equally desperate because I'm trying not to lost my project. Honestly I don't know what to do...</p>
<p>On Friday 10th I left my computer with a couple of windows open - including VSCode with a bunch of Python/Jupyter scripts open. Everything worked completely fine. When I came back home today, I discovered that the computer probably restarted itself or simply I forgot to close/sleep it properly and the battery got empty.</p>
<p><strong>Since restart, I absolutely can't run any Jupyter Notebook using <code>.venv</code> created by VSCode.</strong></p>
<p>The most interesting part of that problem is that on the glance everything is correct - <code>pip</code> and all Jupyter libraries are installed under <code>site-packages</code>, namely <code>notebook</code>, <code>ipykernel</code>, <code>IPython</code> and <code>ipywidgets</code>. However, upon run I always get <code>Running cells with '.venv (Python 3.12.2)' requires the ipykernel and pip package.</code>. Next, when I click <code>Install</code>, I get <code>There is no Pip installer available in the selected environment.</code>.</p>
<p>When I try to run VSCode command <code>Jupyter: Select Interpreter to Start Jupyter Server</code> and select <code>.venv</code>, I always get <code>Running cells with '.venv (Python 3.12.2)' requires the pip, jupyter and notebook package.</code>. Again, I <em>can't</em> install any package through Jupyter extensions because, according to them, <code>pip</code> is not installed at all. Other ways of assigning Jupyter to <code>.venv</code> also fail (e.g. <code>Python: Select Interpreter</code>).</p>
<p>That's total nonsense - the Jupyter kernel <strong>is detected by Jupyter and stored in <code>.venv</code></strong>. Without <code>pip</code> I wouldn't be able to install any library manually which I tried multiple times - especially for <code>ipykernel</code>. I wouldn't even be able to create <code>.venv</code> with <code>requirements.txt</code> that I have. Next, Jupyter output produces this message:</p>
<pre class="lang-none prettyprint-override"><code>Running cells with '.venv (Python 3.12.2)' requires the ipykernel package.
Install 'ipykernel' into the Python environment.
Command: '"d:/path/.venv/Scripts/python.exe" -m pip install ipykernel -U --force-reinstall'
</code></pre>
<p>I even tried copying <strong>this exact command</strong> and running from <code>.venv</code> terminal. It finishes without any problems but still Jupyter does not recognize any needed packages.</p>
<p>I tried the solutions below:</p>
<ol>
<li><a href="https://stackoverflow.com/a/55631501/11485896">https://stackoverflow.com/a/55631501/11485896</a> - <code>.venv</code> already selected. No effect.</li>
<li>VSCode reinstall with <code>.vscode</code> folder deleted. Done, the problem persists.</li>
<li>Python 3.12.2 reinstall. Same.</li>
<li><code>.venv</code> recreation with <code>requirements.txt</code>. Nope.</li>
<li>Settings change (still failed):</li>
</ol>
<pre class="lang-json prettyprint-override"><code> "python.defaultInterpreterPath": ".\\.venv\\Scripts\\python.exe",
"jupyter.allowUnauthorizedRemoteConnection": true,
"python.terminal.activateEnvironment": true,
"python.venvPath": ".\\.venv\\Scripts\\python.exe",
"python.venvFolders": ["venv", ".venv"]
</code></pre>
<ol start="6">
<li><code>py -m ipykernel install --user --name=.venv</code> ran from <code>.venv</code> terminal created virutal environment in <code>C:\Users\user\AppData\Roaming\jupyter\kernels\.venv</code>. Didn't help too.</li>
</ol>
<p>Proof that Jupyter sees kernels (<code>jupyter kernelspec list</code>):</p>
<pre><code>Available kernels:
python3 d:\path\.venv\share\jupyter\kernels\python3
.venv C:\Users\user\AppData\Roaming\jupyter\kernels\.venv
</code></pre>
<p>Jupyter debug's output:</p>
<pre class="lang-none prettyprint-override"><code>>jupyter --version
Selected Jupyter core packages...
IPython : 9.6.0
ipykernel : 6.30.1
ipywidgets : 8.1.7
jupyter_client : 8.6.3
jupyter_core : 5.8.1
jupyter_server : 2.17.0
jupyterlab : 4.4.9
nbclient : 0.10.2
nbconvert : 7.16.6
nbformat : 5.10.4
notebook : 7.4.7
qtconsole : not installed
traitlets : 5.14.3
>jupyter --paths --debug
JUPYTER_PLATFORM_DIRS is set to a false value, or is not set, so we use hardcoded legacy paths for platform-specific directories
JUPYTER_PREFER_ENV_PATH is set to a true value, or JUPYTER_PREFER_ENV_PATH is not set and we detected a virtual environment, making the environment-level path preferred over the user-level path for data and config
JUPYTER_NO_CONFIG is not set, so we use the full path list for config
JUPYTER_CONFIG_PATH is not set, so we do not prepend anything to the config paths
JUPYTER_CONFIG_DIR is not set, so we use the default user-level config directory
Python's site.ENABLE_USER_SITE is not True, so we do not add the Python site user directory 'C:\Users\user\AppData\Roaming\Python'
JUPYTER_PATH is not set, so we do not prepend anything to the data paths
JUPYTER_DATA_DIR is not set, so we use the default user-level data directory
JUPYTER_RUNTIME_DIR is not set, so we use the default runtime directory
config:
d:\path\.venv\etc\jupyter
C:\Users\user\.jupyter
data:
d:\path\.venv\share\jupyter
C:\Users\user\AppData\Roaming\jupyter
runtime:
C:\Users\user\AppData\Roaming\jupyter\runtime
</code></pre>
<p>Any ideas? Please help. I'm very frustrated. Note: I'm not experienced user in Jupyter Notebooks.</p>
<p>Python: 3.12.2
VSCode: 1.105.0
Jupyter extension: 2025.9.0
Windows: 11 24H2 26100.6725</p>
|
<python><visual-studio-code><jupyter-notebook><pip><python-venv>
|
2025-10-13 15:55:31
| 0
| 382
|
Soren V. Raben
|
79,789,430
| 131,433
|
Defining a pydantic dynamic model field in terms of another pydantic dynamic model
|
<p>If I create a dynamic model:</p>
<pre><code>BarModel = create_model(
'BarModel',
apple=(str, 'russet'),
banana=(str, 'yellow')
</code></pre>
<p>And then I want to create another dynamic model with a field of type 'BarModel', do I just use 'BarModel' as the field type? What is the namespace of these things?</p>
|
<python><pydantic><pydantic-v2>
|
2025-10-13 15:35:51
| 0
| 100,613
|
bmargulies
|
79,789,334
| 18,108,367
|
How to test the calling sequence of methods of a class?
|
<p>Sometimes when I develop by TDD (Test Driven Development) I need to test the calling order of some class methods. In general I write Python code so I'll show the last test case that I have just written.</p>
<h3>Production code</h3>
<p>I have a class with 3 methods (in the file <code>mount_manager.py</code>):</p>
<pre><code>class MountManager:
def create_file(self):
...
def format_file(self):
...
def create_and_format(self):
self.create_file()
self.format_file()
</code></pre>
<p>The method <code>create_and_format()</code> is the code under test; in it the method <code>create_file()</code> must be called before the method <code>format_file()</code> (otherwise the calling of the method <code>format_file()</code> fail because the file to format doesn't exist).</p>
<h3>Test code</h3>
<p>With the TDD in mind, before to write the production code, I have written the test code <code>test_create_and_format()</code> with the goal to verify that the order of the methods called by <code>create_and_format()</code> would be correct:</p>
<pre><code>import unittest
from unittest import mock
from mount_manager import MountManager
class MountManagerTest(unittest.TestCase):
..
def test_create_and_format(self):
mock_mount_manager = mock.create_autospec(MountManager)
MountManager.create_and_format(mock_mount_manager)
self.assertEqual('create_file', mock_mount_manager.method_calls[0][0])
self.assertEqual('format_file', mock_mount_manager.method_calls[1][0])
</code></pre>
<p>I know this <a href="https://stackoverflow.com/questions/31913232/how-to-unittest-the-sequence-of-function-calls-made-inside-a-python-fuction">post</a> which is near to mine, but it doesn't work with a class and its methods but with a module and its functions; this difference stops me to apply that method to write my test code.</p>
<h3>Question</h3>
<p>Because in the test I call the method under test, <code>create_and_test()</code>, from the class (see the instruction <code>MountManager.create_and_format()</code> that is a <strong>class attribute reference</strong>) and not from an instance of the class while in the production code I call the method from an instance of the class, I'm looking for a different way to write the test code.</p>
<p>Could someone suggest me an other way to write the test code <code>test_create_and_format()</code></p>
|
<python><tdd><python-unittest><python-mock>
|
2025-10-13 13:49:38
| 1
| 2,658
|
User051209
|
79,789,322
| 589,165
|
Does RDFLib offer static/semantic validation for SPARQL (beyond parseQuery)?
|
<p><code>parseQuery</code> in RDFLib catches syntax errors, but I can not find a way to make RDFLib flag <em>semantic/static</em> issues (e.g., GROUP BY mismatches) before execution on a remote endpoint. <code>translateQuery</code> was suggested as a stricter check, but it does not seem to raise in cases I would expect. Even executing the query on an empty graph produces no Exception.</p>
<pre class="lang-py prettyprint-override"><code>from rdflib import Graph
from rdflib.plugins.sparql.parser import parseQuery
from rdflib.plugins.sparql.algebra import translateQuery
q = """SELECT ?s WHERE {
?s ?p ?o .
}
GROUP BY ?nonexistent"""
ast = parseQuery(q) # succeeds (syntax OK)
algebra = translateQuery(ast) # also succeeds...!?
g = Graph()
for row in g.query(q): # runs on empty graph; no exception...!?
print(row)
</code></pre>
<p>If the SPARQL query is executed on a Blazegraph-based endpoint there is a "Bad aggregate" error, - as I would expect.</p>
|
<python><sparql><rdflib>
|
2025-10-13 13:32:55
| 1
| 6,884
|
Finn Årup Nielsen
|
79,789,192
| 3,584,765
|
Cannot calculate confusion matrix utilizing supervision from roboflow for Yolov8 model
|
<p>I am trying to calculate the confusion matrix for my yolov8 (or yolov11) model utilizing <a href="https://github.com/roboflow/supervision" rel="nofollow noreferrer">supervision</a> from roboflow. I found some instructions but they do not seem to be crystal clear. For example <a href="https://roboflow.com/how-to-create-a-confusion-matrix/yolov8" rel="nofollow noreferrer">these instructions</a> are a bit vague:</p>
<pre><code>import supervision as sv
from ultralytics import YOLO
dataset = sv.DetectionDataset.from_yolo(...)
model = YOLO(...)
def callback(image: np.ndarray) -> sv.Detections:
result = model(image)[0]
return sv.Detections.from_ultralytics(result)
confusion_matrix = sv.ConfusionMatrix.benchmark(
dataset = dataset,
callback = callback
)
confusion_matrix.plot()
</code></pre>
<p>From other sites (e.g. from <a href="https://github.com/roboflow/supervision/issues/626" rel="nofollow noreferrer">this issue report</a> which seem to address the same issue with me) I found out that there are these parameters for the dataset:
<code>images_directory_path</code>, <code>annotations_directory_path</code> and <code>data_yaml_path</code></p>
<p>I modified the code to include this and point to my validation image folder (the subfolder only containing the val images -although pointing to the generic image folder did not help either-), to my annotations (in yolo format, also the subfolder containing only the valiadation annotations) validation folder and to the yaml file of the dataset (to acquire the classes I guess).</p>
<pre><code>dataset = sv.DetectionDataset.from_yolo(images_directory_path=IMAGES_DIR,
annotations_directory_path=ANNOT_DIR,
data_yaml_path=YAML_PATH)
</code></pre>
<p>The model points to the actual model weights in pytorch format.</p>
<p>Unfortunately, this did not produce any valid results. My confusion matric is empty but the classes seem correct. So, I suspect my dataset is not read correctly while the yaml file is read correctly.</p>
<p>Does anyone know how to correctly insert the parameters in the <code>dataset</code> object?</p>
<p>Edit:</p>
<p>I tried to debug the issue. First, I have installed version <code>supervision==0.26.1</code>. I also followed the instruction in <a href="https://github.com/roboflow/supervision/blob/develop/demo.ipynb" rel="nofollow noreferrer">this notebook</a> where I tried to load my dataset.</p>
<p>This seems to work up to the point of
print(dataset.classes)</p>
<blockquote>
<p>['car', 'person', 'truck', 'motor', 'bus', 'bike', 'fire']</p>
</blockquote>
<p>but the next one fails:</p>
<pre><code>IMAGE_NAME = next(iter(dataset.images.keys()))
</code></pre>
<blockquote>
<p>AttributeError: 'DetectionDataset' object has no attribute 'images'</p>
</blockquote>
<p>There seem to be same changes to the interface which prevent the loading of the dataset.</p>
|
<python><machine-learning><object-detection><confusion-matrix><yolov8>
|
2025-10-13 10:50:21
| 1
| 5,743
|
Eypros
|
79,789,132
| 801,924
|
How to deploy a file to the system from a wheel?
|
<p>For given project:</p>
<pre class="lang-none prettyprint-override"><code>.
├── demo
│ └── __init__.py
├── demo.bin
└── setup.py
</code></pre>
<p><code>demo/__init__.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == "__main__":
print("You're in demo")
</code></pre>
<p><code>demo.bin</code>:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
echo "You're in demo too"
</code></pre>
<p><code>setup.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import setuptools
setuptools.setup(
name="demo",
version="0.1.0",
description="a demo",
author_email="",
packages=["demo"],
data_files=[
("/usr/bin", ["demo.bin"]),
],
)
</code></pre>
<p>How to deploy <code>demo.bin</code> to <code>/usr/bin</code> when wheel is installed?</p>
<p>Following process shows it's not:</p>
<ol>
<li>build with <code>python3 setup.py bdist_wheel</code></li>
<li>open <code>dist/demo-0.1.0-py3-none-any.whl</code>, <code>/usr/bin/demo.bin</code> is in the zip</li>
<li>install (eg. <code>docker run -v "$(pwd)/dist:/tmp/demo -it python:3-bullseye bash</code> then <code>pip install /tmp/demo/demo-0.1.0-py3-none-any.whl</code></li>
<li>wheel correctly installed (can check with <code>python -c "import demo"</code>) but <code>/usr/bin/demo.bin</code> not deployed (can check with <code>ls -l /usr/bin/demo.bin</code>, but we can find it at <code>/usr/local/lib/python3.13/site-packages/usr/bin/demo.bin</code>)</li>
</ol>
|
<python><python-wheel>
|
2025-10-13 09:46:41
| 1
| 7,711
|
bux
|
79,789,087
| 2,695,990
|
How to add Unicode emoji to python script in IntelliJ
|
<p>I have a script which I have created couple of months ago with an older version of Intellij + Python plugin. I had to migrate to new windows and also new IntelliJ
Currently I am using :</p>
<ul>
<li>IntelliJ IDEA 2025.2.1 (Ultimate Edition)</li>
<li>Windows 11</li>
<li>Python plugin of version: 252.25557.131</li>
</ul>
<p>I was using 3 colors of emojis to enhance my logs:
🟪
🟦
🟩</p>
<p>After migration I have realized that neither the python code shows the icons as before, neither the logs are representing those icons in the desired colors.
The simplified code looks like:</p>
<pre><code>if __name__ == "__main__":
print("This is a normal print statement", file=sys.stdout)
print(f"🟪 this was purple")
print(f"🟦 this was blue")
print(f"🟩 this was green")
</code></pre>
<p>and In IntelliJ it shows :
<a href="https://i.sstatic.net/Jp8SWuh2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp8SWuh2.png" alt="enter image description here" /></a></p>
<p>After running the code the output looks:</p>
<p><a href="https://i.sstatic.net/Ma8yXrpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ma8yXrpB.png" alt="enter image description here" /></a></p>
<p>How to see the real colors of those emojis in my python script and how to output them correctly to the console output?</p>
|
<python><intellij-idea><console><emoji>
|
2025-10-13 08:59:24
| 2
| 3,174
|
fascynacja
|
79,788,981
| 13,579,159
|
SQLAlchemy session initialization in Command pattern
|
<p>A question on consideration for <em>case/approach</em> choice.</p>
<p>Let's say we have an app that has an ORM model and also uses simple realisation of the Command pattern. Inside of each command we need to instantiate and manipulate our models and send varying queries to the database, hence we need a session. I suggest two approaches:</p>
<ol>
<li>One session, initialized outside of certain command implementation</li>
<li>Lot of sessions, initialized inside of certain command implementation</li>
</ol>
<h5>Question:</h5>
<p>How do you think: which approach is optimal for what case? <br/> Or, perhaps, one can suggest another ways to handle session(s).</p>
<h5>Example of 1<sup>st</sup> approach:</h5>
<pre class="lang-python prettyprint-override"><code>class Command(ABC):
def __init__(self, engine):
self.session_fabric = sessionmaker(engine)
...
def run(self, *args, **kwargs):
...
try:
# one session is initialized before a certain command execution had begun
with self.session_fabric() as session:
result = self._process(session, *args, **kwargs)
# and this session is closed only after a command execution had completed
except ...:
...
else:
...
return result
...
@abstractmethod
def _process(self, session, /, *args, **kwargs):
...
class FirstCommand(Command):
def _process(self, session, /, prarm1, ...):
...
</code></pre>
<h5>Example of 2<sup>nd</sup> approach:</h5>
<pre class="lang-python prettyprint-override"><code>class Command(ABC):
def __init__(self, engine):
self.session_fabric = sessionmaker(engine)
...
def run(self, *args, **kwargs):
...
try:
result = self._process(*args, **kwargs)
except ...:
...
else:
...
return result
...
@abstractmethod
def _process(self, *args, **kwargs):
...
class FirstCommand(Command):
def _process(self, prarm1, ...):
...
# sessions are initialized and closed as many as needed
with self.session_fabric() as session:
...
...
with self.session_fabric() as session:
...
...
</code></pre>
|
<python><session><design-patterns><sqlalchemy><command-pattern>
|
2025-10-13 06:34:29
| 1
| 341
|
Gennadiy
|
79,788,694
| 3,732,793
|
Simpy not showing other process name
|
<p>I am trying to work my self into simpy. Which uses python generators to create sim steps.</p>
<p>From the documentation for simpy state access.</p>
<pre><code>import simpy
def subfunc(env):
print(env.active_process.name) # will print "my_proc"
def my_proc(env):
while True:
print(env.active_process.name) # will print "my_proc"
yield env.process(subfunc(env))
yield env.timeout(1)
env = simpy.Environment()
p1_result = env.process(my_proc(env))
print(type(env.active_process)) # None
env.step()
print(type(env.active_process)) # None
</code></pre>
<p>Is there an explenation why subfunc is printed out as my_proc ?</p>
|
<python><simpy>
|
2025-10-12 17:46:29
| 2
| 1,990
|
user3732793
|
79,788,676
| 2,894,535
|
How do I change argparse positional argument destination name?
|
<p>I am having trouble using <code>add_argument</code>'s <code>dest</code> parameter with positional arguments. The reason I want this is that I use <code>nargs='*'</code> and it makes sense to me to have the argument name be singular so that its help is printed as <code>[target ...]</code>, but the destination to be plural to match its list type:</p>
<pre class="lang-py prettyprint-override"><code># Public function for use from other scripts
def run(targets: list[str]):
pass
if __name__ == '__main__':
from argparse import ArgumentParser
parser = ArgumentParser()
parser.add_argument('target', dest='targets', nargs='*', default=['.'])
run(**vars(parser.parse_args()))
</code></pre>
<p>raises error:</p>
<pre><code> parser.add_argument('target', dest='targets', nargs='*', default=['.'])
File "/usr/lib/python3.13/argparse.py", line 1452, in add_argument
raise ValueError('dest supplied twice for positional argument')
ValueError: dest supplied twice for positional argument
</code></pre>
<p>I know I could "fix" it by not using <code>vars</code> directly or modifying the resulting dict before passing it to <code>run</code>, but I would prefer doing it directly in <code>argparse</code>'s API as the <code>vars</code> ensures that the CLI has the same API as the script imported as a lib.</p>
|
<python><argparse>
|
2025-10-12 16:59:43
| 1
| 3,116
|
Dominik Kaszewski
|
79,788,575
| 7,240,233
|
More than basic form validation with Django
|
<p>I'm learning Django with a small app allowing people to book houses for lodging. I have two models, describing a house and a booking (I'm currently working without a "Customer" model):</p>
<pre class="lang-py prettyprint-override"><code># models.py
from django.db import models
class Housing(models.Model):
name = models.CharField(max_length=255)
capacity = models.SmallIntegerField()
class Booking(models.Model):
house = models.ForeignKey(Housing, on_delete=models.CASCADE)
arrival_date = models.DateField(auto_now_add=False)
departure_date = models.DateField(auto_now_add=False)
client_id = models.TextField()
nb_travellers = models.SmallIntegerField()
</code></pre>
<p>I also have a ModelForm matching the Booking model, in which a customer can book a house:</p>
<pre class="lang-py prettyprint-override"><code># forms.py
from django.forms import ModelForm
from .models import Booking
class BookingForm(ModelForm):
"""Form to make a booking"""
class Meta:
model = Booking
fields = "__all__"
</code></pre>
<p>In my view, I retrieve the form data, and I'd like to add some validation before adding the new booking instance to the database:</p>
<ul>
<li>arrival_date must be before departure_date</li>
<li>The number of travellers must not be higher then the houe capacity</li>
<li>There must not be an existing booking in the databse which has overlapping dates with the new booking</li>
</ul>
<p>I already have code to compute those, it works in additional testing scripts I made, but I am struggling to integrate it properly in the view. Should I look deeper in Django forms validation documentation ? I read something about a <code>clean</code> method to write directly into the ModelForm class, but I am a bit lost ... Someone got any tip or suited tutorial ?</p>
<pre class="lang-py prettyprint-override"><code>from django.shortcuts import render
from .forms import BookingForm
def booking_form(request):
if request.method == 'POST':
form = BookingForm(request.POST)
if form.is_valid():
form.save()
else:
form = BookingForm()
return render(request, 'home.html', {'form': form})
</code></pre>
|
<python><django><forms><validation>
|
2025-10-12 13:19:59
| 0
| 721
|
Micawber
|
79,788,427
| 7,556,091
|
Why OpenCV does not utilize multithreading for acceleration?
|
<p>My system and OpenCV version information, source code, and runtime results are as follows:
From this, it appears that multithreading is not functioning.</p>
<pre class="lang-bash prettyprint-override"><code>~$ mamba list | grep opencv
libopencv 4.12.0 qt6_py312h322f462_605 conda-forge
opencv 4.12.0 qt6_py312h7bb6282_605 conda-forge
py-opencv 4.12.0 qt6_py312h598be00_605 conda-forge
~$ nproc
64
</code></pre>
<p>sources:</p>
<pre class="lang-py prettyprint-override"><code>#%%
import time
import numpy as np
import cv2
#%%
if __name__ == "__main__":
for line in cv2.getBuildInformation().split("\n"):
if "Parallel framework" in line:
print(line.strip())
threads = cv2.getNumThreads()
cpus = cv2.getNumberOfCPUs()
print(f"thread of cpu: {threads}/{cpus}")
image = np.random.randint(0, 256, (4000, 3000))/255
ksize = (51, 51)
count = 1
print("Box filter")
for i in [0, 1, 2, 4, 8]:
cv2.setNumThreads(i)
print(f"thread: {cv2.getNumThreads()}", end=": ")
t1 = time.time()
for _ in range(count):
cv2.boxFilter(image, cv2.CV_32F, ksize)
d1 = time.time() - t1
print(f"consumed {int(d1*1000)}ms")
print("Gaussian blur")
for i in [0, 1, 2, 4, 8]:
cv2.setNumThreads(i)
print(f"thread: {cv2.getNumThreads()}", end=": ")
t1 = time.time()
for _ in range(count):
cv2.GaussianBlur(image, ksize, 0)
d1 = time.time() - t1
print(f"consumed {int(d1*1000)}ms")
</code></pre>
<p>result:</p>
<pre><code>Parallel framework: OpenMP
thread of cpu: 8/64
Box filter
thread: 1: consumed 62ms
thread: 1: consumed 60ms
thread: 2: consumed 60ms
thread: 4: consumed 59ms
thread: 8: consumed 59ms
Gaussian blur
thread: 1: consumed 610ms
thread: 1: consumed 613ms
thread: 2: consumed 612ms
thread: 4: consumed 651ms
thread: 8: consumed 615ms
</code></pre>
|
<python><multithreading><opencv>
|
2025-10-12 07:31:14
| 1
| 1,896
|
progquester
|
79,788,321
| 2,604,247
|
Angle Embedder in Python Messing Up Logging Config
|
<p>I wrote another question on this earlier, but could not pinpoint the issue on my side, here, I am giving a minimal reproducible code.</p>
<h5>System</h5>
<p>Angle version 0.5.6</p>
<p>UV 0.8.22</p>
<p>Python 3.12</p>
<p>Ubuntu 24.04</p>
<p>I have used the python <code>logging</code> library for sometime, and recently, it is showing some weird behaviour (described below). After some investigation, I realised the surprises were introduced when I was importing <code>angle_emb</code> module (used for text embedding). Here is a minimal code sample.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
import logging
import angle_emb # The problematic import
logging.basicConfig(
format="%(asctime)s|%(levelname)s: %(message)s",
datefmt="%H:%M:%S, %d-%b-%Y",
level=logging.DEBUG,
)
logging.info(msg='This is an information log')
logging.debug(msg='This is a debug log')
</code></pre>
<h3>Expected Output</h3>
<pre class="lang-none prettyprint-override"><code>21:27:39, 11-Oct-2025|INFO: This is an information log
21:27:39, 11-Oct-2025|DEBUG: This is a debug log
</code></pre>
<h3>My Output</h3>
<pre class="lang-none prettyprint-override"><code>INFO:root:This is an information log
</code></pre>
<p>So basically, two issues</p>
<ul>
<li>Debug logs are ignored</li>
<li>The logging timestamp is suppressed</li>
</ul>
<p>Once you remove or comment out the problematic import, you get the expected output.</p>
<p>Am I doing something wrong with the logging library? Or, is there a way to not let <code>angle</code> mess with my logging config? Or is it a bug?</p>
|
<python><nlp><python-logging><text-embedding-ada-002>
|
2025-10-12 01:03:16
| 1
| 1,720
|
Della
|
79,788,256
| 2,252,948
|
Invoking Enter-VsDevShell with pytest prints both sucessful AND failed output...?
|
<p>I've created a python script <code>foo.py</code> which contains a simple pytest test that invokes a powershell script, <code>tmp.ps1</code>. <code>tmp.ps1</code> obtains the Visual Studio installation path, then sources a powershell script within Visual Studio, <code>Launch-VsDevShell.ps1</code>, to setup the environment. This simulates the first few steps needed for a windows C/C++ build in some contexts.</p>
<p>Looking at the underlying <code>Launch-VsDevShell.ps1</code>, it does an <code>Import-Module</code> of a <code>.dll</code>, then invokes <code>Enter-VsDevShell</code>, which I guess is a command that's defined within the <code>.dll</code>. Regardless, this setup works as expected in the following cases:</p>
<ul>
<li>Running <code>.\tmp.ps1</code> in powershell</li>
<li>Running <code>python3 foo.py</code></li>
<li>Running <code>pytest foo.py -rP -v</code> when the line that sources <code>Launch-VsDevShell.ps1</code> is commented out</li>
</ul>
<p>However, if I run <code>pytest foo.py -rP -v</code> with the script-sourcing line included, stdout shows the <em>exact same</em> successful output (including Powershell version), but <em>I also see errors in stderr</em>. No exception is thrown and the test passes anyway, but that's really weird, and when real errors occur I don't know which messages to trust...!</p>
<p>I guess in <em>some</em> cases I could just run pytest within a setup power shell, but that wouldn't work if I invoke other compilers within the same <code>pytest</code> setup (e.g. testing with multiple versions of Visual Studio).</p>
<p>Any explanation why this happens with <code>pytest</code> specifically? How can I fix this?</p>
<pre class="lang-py prettyprint-override"><code># foo.py
import subprocess
import os
def test_run() -> None:
print(subprocess.run)
os.environ["COMSPEC"] = (
"C:\\Windows\\SysWOW64\\WindowsPowerShell\\v1.0\\powershell.exe"
)
subprocess.run([".\\tmp.ps1"], shell=True, check=True)
# Uncomment when running directly with python.
# test_run()
</code></pre>
<pre class="lang-none prettyprint-override"><code># tmp.ps1
Join-Path $PSHOME powershell.exe
$vsInstallPath = & "${env:ProgramFiles(x86)}/Microsoft Visual Studio/Installer/vswhere.exe" -property installationpath
echo "VS install path: $vsInstallPath"
. "$vsInstallPath\Common7\Tools\Launch-VsDevShell.ps1" -arch amd64 -SkipAutomaticLocation -VsInstallationPath "$vsInstallPath"
echo 'hurray'
</code></pre>
<p>And here's the output of <code>pytest foo.py -rP -v</code>:</p>
<pre><code>======================================================== test session starts ========================================================
platform win32 -- Python 3.12.10, pytest-8.4.0, pluggy-1.6.0 -- C:\Users\me\Git\foo\.venv\Scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Users\me\Desktop
collected 1 item
tmp.py::test_run PASSED [100%]
============================================================== PASSES ===============================================================
_____________________________________________________________ test_run ______________________________________________________________
------------------------------------------------------- Captured stdout call --------------------------------------------------------
<function run at 0x000001FABB0FD440>
C:\Windows\SysWOW64\WindowsPowerShell\v1.0\powershell.exe
VS install path: C:\Program Files\Microsoft Visual Studio\2022\Community
**********************************************************************
** Visual Studio 2022 Developer PowerShell v17.0
** Copyright (c) 2022 Microsoft Corporation
**********************************************************************
hurray
------------------------------------------------------- Captured stderr call --------------------------------------------------------
vswhere.exe : The term 'vswhere.exe' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ vswhere.exe -property catalog_productSemanticVersion -path C:\Program ...
+ ~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (vswhere.exe:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Get-ChildItem : A positional parameter cannot be found that accepts argument 'Visual'.
At line:1 char:1
+ dir C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\T ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-ChildItem], ParameterBindingException
+ FullyQualifiedErrorId : PositionalParameterNotFound,Microsoft.PowerShell.Commands.GetChildItemCommand
======================================================== 1 passed in 12.82s =========================================================
</code></pre>
|
<python><powershell><visual-studio><pytest>
|
2025-10-11 22:11:38
| 0
| 1,412
|
adentinger
|
79,788,168
| 552,683
|
How to efficiently denormalize a SQL DB to produce Parquet files
|
<p>I'm trying to create a parquet file from a heavily normalized SQL database with a snowflake schema. Some of the dimensions have very long text attributes so that a simply running a big set of joins to denormalize the data that's fed to PyArrow takes a while even with a smallish dataset (10s of thousands of facts), and produces a .parquet file about 10 times larger than the original Sqlite file. E.g.:</p>
<pre><code>import pandas as pd
import pyarrow as pa
...
qry = """
SELECT *
FROM dim1
JOIN dim2 using (d1_id)
JOIN dim3 using (d1_id)
JOIN dim4 using (d2_id)
JOIN facts using (d1_id, d2_id, d3_id, d4_id)
"""
df = pd.read_sql_query(qry, conn)
t = pa.Table.from_pandas(df)
</code></pre>
<p>As a workaround, I have resorted to reading each database table separately, creating a corresponding <code>pyarrow.Table</code>, then converting columns for the large attributes into dictionary types, and only then joining everything them all with the facts table.</p>
<pre><code>d1 = pd.read_sql_query("SELECT * FROM dim1", conn);
d1 = d1.set_column(4, "long_attr", d1.column('long_attr').dictionary_encode())
...
t = d1.join(d2, keys=["d1_id"]).join(d3, keys=["d1_id"]), ...)
t
</code></pre>
<p>This works well, but is rather tedious, as I convert every text column in the dimensions tables one at a time. Is there a better way to do this? It seems like a common data management task, so I'd expect a more general, or at least standard, solution to exist.</p>
|
<python><pandas><parquet><pyarrow>
|
2025-10-11 18:19:44
| 0
| 1,140
|
Davor Cubranic
|
79,788,108
| 4,518,341
|
How do I calculate a relative time delta in Pandas?
|
<p>I have a column of datetimes and I want to get the difference between values in terms of years, months, etc, instead of timedeltas that only provide days. How do I do this in Pandas?</p>
<p>Pandas provides <a href="https://pandas.pydata.org/docs/reference/api/pandas.tseries.offsets.DateOffset.html" rel="nofollow noreferrer">DateOffset</a> for relative deltas, but the docs say "the positional argument form of <a href="https://dateutil.readthedocs.io/en/stable/relativedelta.html" rel="nofollow noreferrer">relativedelta</a> is not supported", and that's the form that calculates a relative delta (as opposed to <em>specifying</em> a relative delta).</p>
<p>For this example, I'm only dealing with the min and max of the column to get the span, but I eventually want to apply this to the whole column.</p>
<pre><code>min_max = df_most_watched['time'].agg(['min', 'max'])
</code></pre>
<pre class="lang-none prettyprint-override"><code>min 2019-06-18 18:22:05.991000+00:00
max 2021-02-15 18:03:02.893000+00:00
Name: time, dtype: datetime64[ns, UTC]
</code></pre>
<p><code>min_max.diff()</code>:</p>
<pre class="lang-none prettyprint-override"><code>min NaT
max 607 days 23:40:56.902000
Name: time, dtype: timedelta64[ns]
</code></pre>
<p>The output should be 1 year, 7 months, 27 days, 23:40:56.902000.</p>
<h2>Attempted</h2>
<p>Just to confirm, I tried <code>pd.DateOffset(low, high)</code> and got <code>TypeError: `n` argument must be an integer, got <class 'pandas._libs.tslibs.timestamps.Timestamp'></code></p>
<p>I tried <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.pct_change.html" rel="nofollow noreferrer"><code>.pct_change()</code></a> on a whim hoping it would have a special case for datetimes, but no dice. <code>TypeError: cannot perform __truediv__ with this index type: DatetimeArray</code></p>
<p>I checked if <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.diff.html" rel="nofollow noreferrer"><code>.diff()</code></a> had some sort of setting like <code>relative=True</code>, but no.</p>
<h2>Research</h2>
<p>In the User Guide, the <a href="https://pandas.pydata.org/docs/user_guide/timeseries.html" rel="nofollow noreferrer">Time series</a> page doesn't have anything relevant when I Ctrl+F for "relative" and the <a href="https://pandas.pydata.org/docs/user_guide/timedeltas.html" rel="nofollow noreferrer">Time deltas</a> page doesn't mention "relative" at all.</p>
<p>I checked if <a href="https://pandas.pydata.org/docs/reference/api/pandas.tseries.offsets.DateOffset.html" rel="nofollow noreferrer">DateOffset</a> might have any alternate constructors that could take two timestamps, but the docs don't mention any methods starting with <code>from</code> or anything else.</p>
<h2>Setup</h2>
<pre><code>min_max = pd.Series(
{'min': pd.Timestamp('2019-06-18 18:22:05.991', tz='UTC'),
'max': pd.Timestamp('2021-02-15 18:03:02.893', tz='UTC')},
name='time')
</code></pre>
|
<python><pandas><datetime>
|
2025-10-11 15:59:56
| 2
| 33,775
|
wjandrea
|
79,787,931
| 8,522,463
|
Removing gridlines from scanned forms
|
<p>I am trying to remove gridlines from a form snippet as a preprocess step for OCR.</p>
<p>However, this turned out quite challenging when texts overlap with the gridlines.</p>
<p>Sample Images<br />
<img src="https://i.sstatic.net/nSV9S1hP.png" alt="Sample images" /></p>
<p>I am very new to open cv and this is What I tried so far-</p>
<p>Used the sample opencv square detection, this fails when texts overlap with gridlines.</p>
<pre><code>import cv2
import numpy as np
def angle(p1, p2, p0):
dx1, dy1 = p1[0] - p0[0], p1[1] - p0[1]
dx2, dy2 = p2[0] - p0[0], p2[1] - p0[1]
return (dx1 * dx2 + dy1 * dy2) / np.sqrt(
(dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10
)
def findSquares(image, min_area=150, min_side=15, aspect_tol=0.25, angle_tol=0.4):
squares = []
img = cv2.pyrUp(cv2.pyrDown(image), dstsize=(image.shape[1], image.shape[0]))
for l in range(11):
if l == 0:
g = cv2.Canny(img, 255, 255)
g = cv2.dilate(g, None)
g = cv2.morphologyEx(
g, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
)
else:
_, g = cv2.threshold(img, int((l + 1) * 255 / 11), 255, cv2.THRESH_BINARY)
g = cv2.morphologyEx(
g, cv2.MORPH_CLOSE, cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
)
for cnt in cv2.findContours(g, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]:
if cv2.contourArea(cnt) < min_area:
continue
hull = cv2.convexHull(cnt)
approx = cv2.approxPolyDP(hull, 0.02 * cv2.arcLength(hull, True), True)
if len(approx) != 4 or not cv2.isContourConvex(approx):
continue
pts = approx.reshape(4, 2)
edges = [np.linalg.norm(pts[i] - pts[(i + 1) % 4]) for i in range(4)]
if min(edges) < min_side:
continue
w, h = max(edges), min(edges)
if h / w < 1 - aspect_tol:
continue
if (
max(
[
abs(angle(pts[(i + 2) % 4], pts[i], pts[(i + 1) % 4]))
for i in range(4)
]
)
> angle_tol
):
continue
squares.append(approx)
if squares:
m = np.median([cv2.contourArea(s) for s in squares])
squares = [s for s in squares if 0.5 * m <= cv2.contourArea(s) <= 1.5 * m]
squares.sort(
key=lambda s: (np.mean([p[0][1] for p in s]), np.mean([p[0][0] for p in s]))
)
return squares
if __name__ == "__main__":
img = cv2.imread("t7.png", cv2.IMREAD_GRAYSCALE)
squares = findSquares(img)
img_color = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
cv2.polylines(img_color, squares, True, (0, 255, 0), 3, cv2.LINE_AA)
cv2.imshow("Squares", img_color)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Would really appreciate some help here.
Extended set of test images
<a href="https://drive.google.com/drive/folders/17iLIpcKin13tDvMC9DJN7Ja034JFBwFQ?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/17iLIpcKin13tDvMC9DJN7Ja034JFBwFQ?usp=sharing</a></p>
|
<python><opencv><image-processing><ocr><scikit-image>
|
2025-10-11 09:20:19
| 0
| 2,158
|
Julkar9
|
79,787,848
| 1,581,090
|
How to make sure I set the correct Python interpreter in PyCharm as in the terminal?
|
<p>I open a PyCharm project (Windows 11) and inside PyCharm I run the terminal to install some requirements:</p>
<pre><code>pip install -r requirements.txt
</code></pre>
<p>However, when opening a Python file from the same project in the same folder all the modules that should be imported are marked in "red", i.e. "unresolved reference".</p>
<p>It seems that the terminal in PyCharm and PyCharm use a different Python interpreter. Running this on Windows I check what Python is used in the terminal and do:</p>
<pre><code>where python
</code></pre>
<p>but I do not get any output.</p>
<p>How to set this up correctly, so that when I install something on the terminal inside PyCharm, PyCharm actually uses that?</p>
<pre><code>python -V
Python 3.10.11
</code></pre>
|
<python><windows><pycharm>
|
2025-10-11 06:25:38
| 1
| 45,023
|
Alex
|
79,787,709
| 5,269,892
|
Excel incorrectly processes formulas cross-referencing each other
|
<p>I have generated an Excel file, with the following columns (cf. minimal code below):</p>
<ul>
<li><strong>ID (col. A):</strong> set via hardcoded values (i.e. not via Excel formula)</li>
<li><strong>Manually-set resolved (col. B):</strong> a column that should either be left blank or set <code>TRUE</code> by the user</li>
<li><strong>Resolved (col. D)</strong>: a column indicating whether "something" (represented by the row) has been "resolved"; it should be <code>TRUE</code> iff:
<ul>
<li><em>Manually-set resolved</em> is <code>TRUE</code></li>
<li>and/or <em>Any in ID resolved</em> (see directly below) is <code>TRUE</code></li>
</ul>
</li>
<li><strong>Any in ID resolved (col. C)</strong>: a column indicating whether any of the rows sharing the <em>ID</em> value have <em>Resolved</em> equal <code>TRUE</code></li>
</ul>
<p>Formulas are <code>OR</code>-based for column <em>Resolved</em>, <code>MAXIFS</code>-based for column <em>Any in ID resolved</em> (cf. minimal code below).</p>
<p>Example formulas (note: German locale for installed Excel - does not matter; English formula names are recognized):</p>
<ul>
<li><em>Resolved</em> cell: <code>D2=ODER(B2=WAHR;C2=WAHR)</code></li>
<li><em>Any in ID resolved</em> cell: <code>C2=@MAXIFS($D$2:$D$11;$A$2:$A$11;A2)</code></li>
</ul>
<p><strong>Issue:</strong></p>
<p>I would expect that at the beginning both the columns <em>Resolved</em> and <em>Any in ID resolved</em> are all <code>FALSE</code> and that values are auto-set by the formulas to <code>TRUE</code> as soon as the user sets the column <em>Manually-set resolved</em> to <code>TRUE</code>.</p>
<p>Unfortunately, that is not what happens - rather, when opening the generated Excel file, the cells all display <code>#NAME?</code> (cf. screenshot below), as if there was an issue with the formulas - but I cannot figure out what it is. And sometimes, when fiddling around with the generated Excel, 0 is displayed (which can be seen as representing <code>FALSE</code>) - but then column <em>Resolved</em> does not auto-update when <em>Manually-set resolved</em> is set to <code>TRUE</code>.</p>
<p><a href="https://i.sstatic.net/mdMasYeD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdMasYeD.png" alt="Screenshot of issue" /></a></p>
<p><strong>Notes:</strong></p>
<ul>
<li>Excel 365 is used - version 2502, so <code>MAXIFS</code> can be used (is available)</li>
<li>Excel also warns about circularity - and yes, the columns <em>Resolved</em> and <em>Any in ID resolved</em> refer to each other - but it should NOT be an issue, since <em>Manually-set resolved</em> should break any such circularity issues via first setting <em>Resolved</em> to <code>TRUE</code> and based on that <em>Any in ID resolved</em> to <code>TRUE</code>.</li>
</ul>
<p><strong>Questions:</strong></p>
<ul>
<li>Why do the <em>Resolved</em> and <em>Any in ID resolved</em> columns display <code>#NAME?</code> rather than <code>FALSE</code>?</li>
<li>Why does the <em>Resolved</em> column not auto-update when the <em>Manually-set resolved</em> column is set to <code>TRUE</code>?</li>
</ul>
<p><strong>Minimal example code to generate a test Excel file:</strong></p>
<pre><code>from openpyxl import Workbook
wb = Workbook()
ws = wb.active
ws.title = "Example"
# Column headers
ws['A1'] = "ID"
ws['B1'] = "MANUALLY_SET_RESOLVED"
ws['C1'] = "ANY_IN_ID_RESOLVED"
ws['D1'] = "RESOLVED"
num_rows = 10
# Populate example data for ID (e.g. 3 groups: 1,2,3 repeated)
for row in range(2, num_rows + 2):
ws[f"A{row}"] = (row - 2) % 3 + 1 # Cycle through 1,2,3
# MANUALLY_SET_RESOLVED starts empty (FALSE/blank)
ws[f"B{row}"] = None
# ANY_IN_ID_RESOLVED: MAXIFS over RESOLVED column for group
ws[f"C{row}"].value = f'=MAXIFS($D$2:$D${num_rows + 1},$A$2:$A${num_rows + 1},A{row})'
# RESOLVED: OR of MANUALLY_SET_RESOLVED and ANY_IN_ID_RESOLVED
ws[f"D{row}"].value = f'=OR(B{row}=TRUE,C{row}=TRUE)'
wb.save("example_file.xlsx")
</code></pre>
<hr />
<p><strong>Update for @MGonet:</strong></p>
<p>The MAXIFS appears to have issues somehow even with numbers rather than booleans - see this screenshot:</p>
<p><a href="https://i.sstatic.net/Yjm4gCkx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yjm4gCkx.png" alt="enter image description here" /></a></p>
<p>But when using MAXWENNS instead, the formula returns a result - not the correct one though, since it will always return 0 for any booleans (even <code>TRUE</code>). This indicates that there might be a localization issue, and that booleans might need to be converted to numbers via <code>--</code> inside the formula:</p>
<p><a href="https://i.sstatic.net/ZLhaQu2m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZLhaQu2m.png" alt="enter image description here" /></a></p>
<p>Also, in such a case, without overwriting the <em>Resolved</em> column manually with booleans, <code>MAXWENNS</code> returns 0, Excel displays circularity warnings and also visualizes that via a blue-red double arrow (cf. screenshot), and setting <em>Manually-set resolved</em> to <code>TRUE</code> does NOT update <em>Resolved</em> - which it should, however.</p>
<p><a href="https://i.sstatic.net/53PBSTdH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53PBSTdH.png" alt="enter image description here" /></a></p>
<p>Then, only when double-clicking into a <em>Resolved</em> cell (but I want auto-re-calculation) and pressing Enter, is the value set - to 0 - which is incorrect, since it should be <code>TRUE</code>. I do actually not understand how <code>=OR(TRUE;0)</code> returns 0 rather than <code>TRUE</code> - that makes no sense to me. Apart from that returning 0 for <code>=NOT(0)</code> is also plain wrong:</p>
<p><a href="https://i.sstatic.net/6kJRycBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6kJRycBM.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/65qSy51B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65qSy51B.png" alt="enter image description here" /></a></p>
|
<python><excel><excel-formula><openpyxl><maxifs>
|
2025-10-10 21:58:37
| 2
| 1,314
|
silence_of_the_lambdas
|
79,787,610
| 179,583
|
Find first matching array index in Python, but using lambda helper function
|
<p>I have a data structure like</p>
<pre class="lang-py prettyprint-override"><code>arr = [
(2014, 0.21, 0.27),
(2015, 0.23, 0.27),
(2016, 0.25, None),
(2017, 0.25, None),
(2018, 0.27, None),
]
</code></pre>
<p>where I want to find the index of the first tuple that does not have data in its third slot. If JavaScript for example I would use <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/findIndex" rel="nofollow noreferrer">Array.prototype.findIndex</a> and pass a helper function to determine the match condition.</p>
<p>But in Python it looks like the prime index-finding candidate, the <a href="https://docs.python.org/3/library/stdtypes.html#sequence.index" rel="nofollow noreferrer">sequence.index</a> method, does <strong>not</strong> support passing a lambda, so I can't just do:</p>
<pre><code># ValueError: <function <lambda> at 0x…> is not in list
i = arr.index(lambda d: d[2] is None)
</code></pre>
<p>Am I missing a built-in way to find such an index using a helper function?</p>
|
<python><list>
|
2025-10-10 18:14:04
| 0
| 18,149
|
natevw
|
79,787,552
| 6,101,024
|
Relative Import With Parent Package
|
<p>src</p>
<ul>
<li><p>testing</p>
<ul>
<li>test.py</li>
</ul>
</li>
<li><p>utils_dataset</p>
<ul>
<li>px_chol.py</li>
<li>another_file.py</li>
</ul>
</li>
<li><p>test.py ---> is calling a class declared in px_chol.py. therefore, it has
<code>from utils_dataset.px_chol import CLASS_NAME</code> declaration</p>
</li>
</ul>
<p>When executing test.py I got the following Error:</p>
<pre><code> from utils_dataset.px_chol import CLASS_NAME
ModuleNotFoundError: No module named 'utils_dataset'
</code></pre>
<p>Any idea?</p>
<p>I tried:</p>
<ol>
<li><code>pip install -e .</code></li>
<li><code>export PYTHONPATH="${PYTHONPATH}:/home/CORP/my_name/ProjectDevelopment/ProjectRoot"</code></li>
<li>run the code with <code>python src/testing/test.py</code> from the ProjectRoot folder</li>
<li>run the code with <code>python testing/test.py</code> from the ProjectRoot/src/ folder</li>
</ol>
|
<python><package>
|
2025-10-10 16:49:49
| 3
| 697
|
Carlo Allocca
|
79,787,529
| 8,800,836
|
Subtlety in initializing attributes with methods in modules from the `equinox` `jax` library
|
<p>I have the following code that defines an abstract class and its final subclasse. The two classes are both subclasses of the <a href="https://docs.kidger.site/equinox/api/module/module/" rel="nofollow noreferrer"><code>equinox.Module</code></a> class, which registers class attributes as the leaves of a PyTree container.</p>
<pre class="lang-py prettyprint-override"><code># === IMPORTS ===
from abc import ABC, abstractmethod
import jax
from jax.typing import ArrayLike
import jax.numpy as jnp
import equinox as eqx
from quadax import quadgk
jax.config.update("jax_enable_x64", True)
class MyClass(eqx.Module): # Works if I toggle to MyClass(ABC)
rtol = 1e-12
atol = 1e-12
param: ArrayLike
def __init__(self):
self.param = self._integral_moment(3) # Fails, but works if I toggle to something like "self.param = self.func(1.)"
@abstractmethod
def func(self, tau):
pass
def func_abs(self, tau):
return jnp.abs(self.func(tau))
def _integral_moment(self, order):
return quadgk(self._integrand_moment, [0, jnp.inf], args=(order,), epsrel=self.rtol, epsabs=self.atol)[0]
def _integrand_moment(self, tau, order):
return self.func_abs(tau) * jnp.abs(tau)**order
class MySubClass(MyClass):
gamma: ArrayLike
kappa: ArrayLike
w0: ArrayLike
def __init__(self, gamma, kappa, w0):
self.gamma = jnp.asarray(gamma)
self.kappa = jnp.asarray(kappa)
self.w0 = jnp.asarray(w0)
super().__init__()
def func(self, tau):
return self.gamma * jnp.exp(-1j * self.w0 * tau) * jnp.exp(-self.kappa*jnp.abs(tau)/2)
# Test
test = MySubClass(gamma=1., kappa=1., w0=1.)
test.param
</code></pre>
<p>This code produces the <code>AttributeError</code> message:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[21], line 52
48 return self.gamma * jnp.exp(-1j * self.w0 * tau) * jnp.exp(-self.kappa*jnp.abs(tau)/2)
51 # Test
---> 52 test = MySubClass(gamma=1., kappa=1., w0=1.)
53 test.param
[... skipping hidden 2 frame]
Cell In[21], line 45
43 self.kappa = jnp.asarray(kappa)
44 self.w0 = jnp.asarray(w0)
---> 45 super().__init__()
Cell In[21], line 19
18 def __init__(self):
---> 19 self.param = self._integral_moment(3)
[... skipping hidden 1 frame]
Cell In[21], line 29
28 def _integral_moment(self, order):
---> 29 return quadgk(self._integrand_moment, [0, jnp.inf], args=(order,), epsrel=self.rtol, epsabs=self.atol)[0]
...
659 and isinstance(out, types.MethodType)
660 and out.__self__ is self
661 ):
AttributeError: 'MySubClass' object has no attribute 'param'
</code></pre>
<p>This error clearly comes from a restriction of the <code>equinox.Module</code>, since if I change the parent class to <code>ABC</code>, the code runs fine.</p>
<p>First, I thought that maybe <code>equinox</code> did not allow me to use methods to initialize attributes. But if I use the <code>func()</code> method instead of the <code>_integral_moment()</code> method to initialize <code>param</code>, the code works fine.</p>
<p>So I just don't understand what is going on here. I thought it would be better to ask here before asking the developers at <code>equinox</code>.</p>
<p>This uses <code>equinox</code> version 0.13.1 with <code>jax</code> version 0.7.2.</p>
|
<python><class><constructor><jax>
|
2025-10-10 16:18:47
| 2
| 539
|
Ben
|
79,787,507
| 1,060,339
|
Changing the owner of .venv created by uv inside Docker
|
<p>I have a Django app build by uv running inside Docker. I mount the local filesystem as a volume in the container using Docker Compose so that edits to the source code locally trigger reloading of the app in the container. It <em>almost</em> works.</p>
<p>The issue is that the <code>.venv</code> directory built by uv is owned by the root user of the Docker container. This means that I cannot edit those files from my local filesystem without root access.</p>
<p>I have gotten around this with pip/pipenv/poetry/pdm in the past by installing the venv as a non-root user who has the same uid and guid as my local user (those values are passed into Docker via a <code>.env</code> file). But I can't work out how to do that for uv.</p>
<p><code>Dockerfile</code>:</p>
<pre class="lang-none prettyprint-override"><code>FROM python:3.12-slim-trixie
# create non-root user
RUN addgroup --system appuser && adduser --system --group --home /home/appuser appuser
# set work directory
WORKDIR /app
# environment variables
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
UV_LINK_MODE=copy \
UV_PYTHON_DOWNLOADS=never \
UV_PROJECT_ENVIRONMENT=$APP_HOME/.venv
# install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
# install system dependencies
RUN apt-get update
RUN apt-get install -y --no-install-recommends \
build-essential netcat-traditional \
python-is-python3 python3-gdal python3-psycopg2
# switch to app user [THIS MAKES THE NEXT COMMAND FAIL]
# USER appuser
# synchronise project dependencies
COPY pyproject.toml uv.lock ./
RUN --mount=type=cache,target=/root/.cache \
uv sync --all-groups --frozen --no-install-project
# run entrypoint script
COPY ./entrypoint.sh .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
</code></pre>
<p><code>docker-compose.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>services:
server:
build:
context: .
command: uv run manage.py runserver 0.0.0.0:8000
tty: true
environment:
DJANGO_SETTINGS_MODULE: config.settings
volumes:
- ./:/app/
ports:
- "8000:8000"
env_file:
- .env
</code></pre>
<p><code>entrypoint.sh</code>:</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/sh
set -euo pipefail
cd /app
# ensure "app" user in the container has same ids as local user outside the container
if [ ! -z ${RUN_AS_UID} ]; then usermod --uid $RUN_AS_UID appuser; fi
if [ ! -z ${RUN_AS_GID} ]; then groupmod --gid $RUN_AS_GID apuser; fi
# setup django
uv run ./manage.py migrate
uv run ./manage.py collectstatic --no-input --link
# run whatever command was passed to the container (from docker-compose)
exec "$@"
</code></pre>
<p><code>.env</code>:</p>
<pre class="lang-bash prettyprint-override"><code>RUN_AS_UID=1001
RUN_AS_GID=1001
</code></pre>
|
<python><django><docker><docker-compose><uv>
|
2025-10-10 15:47:42
| 1
| 4,586
|
trubliphone
|
79,787,097
| 17,580,381
|
Type hints for concurrent.futures.Executor subclasses
|
<p>I have the following code developed in VSCode that runs without error:</p>
<pre><code>import random
import time
from concurrent.futures import (
InterpreterPoolExecutor,
ProcessPoolExecutor,
ThreadPoolExecutor,
Executor,
)
from os import process_cpu_count
def cpus() -> int:
"""
Get the number of available CPUs minus one but with a minimum of 2
"""
ncpus = process_cpu_count() or 2
return ncpus - 1 if ncpus > 3 else 2
def func(_: int) -> list[int]:
return [random.randint(1, 10) for _ in range(10_000)]
def process(pool: Executor) -> None:
start = time.perf_counter()
with pool() as e:
for _ in e.map(func, range(cpus())):
pass
duration = time.perf_counter() - start
print(pool.__name__, f"{duration=:.4f}s")
if __name__ == "__main__":
for pool_executor in (
InterpreterPoolExecutor,
ProcessPoolExecutor,
ThreadPoolExecutor,
):
process(pool_executor)
</code></pre>
<p>The issue I have is that <code>pylance</code> complains about the line:</p>
<pre><code>with pool() as e:
</code></pre>
<p>...indicating that "Object type of Executor is not callable"</p>
<p>Presumably this is because the type hint of <code>Executor</code> is wrong.</p>
<p>How do I overcome this?</p>
|
<python><pylance>
|
2025-10-10 07:38:28
| 1
| 28,997
|
Ramrab
|
79,787,051
| 2,123,706
|
python plotly scatter ols trendline has a kink in it
|
<p>I am using plotly express to model some data, and wanted to add a <code>trendline = 'ols'</code> to it.</p>
<p>when I do, I obtain a kink in the result</p>
<p><a href="https://i.sstatic.net/v8YNlpMo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8YNlpMo.png" alt="enter image description here" /></a></p>
<p>here is the code used:</p>
<pre><code>d={'category': {63: 'test', 128: 'test', 192: 'test', 256: 'test', 320: 'test', 385: 'test', 449: 'test', 513: 'test', 577: 'test', 641: 'test', 706: 'test', 770: 'test', 834: 'test', 898: 'test', 962: 'test', 1026: 'test', 1090: 'test', 1155: 'test', 1219: 'test', 1283: 'test', 1347: 'test', 1411: 'test', 1475: 'test', 1539: 'test', 1605: 'test', 1669: 'test', 1733: 'test', 1797: 'test', 1861: 'test', 1925: 'test', 1989: 'test', 2054: 'test', 2118: 'test', 2182: 'test'}, 'date': {63: '20250901', 128: '20250902', 192: '20250903', 256: '20250904', 320: '20250908', 385: '20250909', 449: '20250910', 513: '20250912', 577: '20250914', 641: '20250915', 706: '20250916', 770: '20250917', 834: '20250918', 898: '20250919', 962: '20250920', 1026: '20250921', 1090: '20250922', 1155: '20250923', 1219: '20250924', 1283: '20250925', 1347: '20250926', 1411: '20250927', 1475: '20250928', 1539: '20250929', 1605: '20250930', 1669: '20251001', 1733: '20251002', 1797: '20251003', 1861: '20251004', 1925: '20251005', 1989: '20251006', 2054: '20251007', 2118: '20251008', 2182: '20251009'}, 'sec': {63: 161.74, 128: 145.616, 192: 83.31, 256: 147.867, 320: -0.0, 385: -0.0, 449: -0.0, 513: -0.0, 577: -0.0, 641: -0.0, 706: -0.0, 770: -0.0, 834: -0.0, 898: -0.0, 962: -0.0, 1026: -0.0, 1090: -0.0, 1155: 1198.963, 1219:213.096, 1283: 194.723, 1347: 278.154, 1411: 361.6, 1475: 970.48, 1539: 268.713, 1605: 267.276, 1669: 524.43, 1733: 2162.903, 1797: 311.453, 1861: 346.083, 1925: 801.653, 1989: 284.736, 2054: 329.89, 2118: 296.176, 2182: 271.403}}
df=pd.DataFrame(d)
fig = px.scatter(df, x='date', y='sec', color = 'category', title='Times', trendline='ols')
fig.show()
</code></pre>
<p>Why is there a kink in it? Is there a way to make it a single line? What am I missing?</p>
<p>TIA</p>
|
<python><pandas><plotly><trendline>
|
2025-10-10 06:30:11
| 1
| 3,810
|
frank
|
79,786,917
| 8,800,836
|
Why must classes be defined in order of appearance in python module?
|
<p>Let's say I have a python file <code>test.py</code> that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class MyFirstClass():
prop=3
class MySecondClass():
prop1 = 0.
prop2 = MyFirstClass()
</code></pre>
<p>Then I can run this and it works:</p>
<pre class="lang-py prettyprint-override"><code>from my_test import MyFirstClass, MySecondClass
obj = MySecondClass()
</code></pre>
<p>But if I write my python file like this, just inverting the definition order:</p>
<pre class="lang-py prettyprint-override"><code>class MySecondClass():
prop1 = 0.
prop2 = MyFirstClass()
class MyFirstClass():
prop=3
</code></pre>
<p>then I get an error saying that <code>MyFirstClass</code> is not defined.</p>
<p>So Python obviously expect <code>MyFirstClass</code> to be defined before it is used in <code>MySecondClass</code>.</p>
<p>But I'm just wondering why it is like that. After all, function definitions with <code>def</code> can be made in any order within a module. So I'm (perhaps naively) a bit surprised that class definitions must be made in order like this.</p>
<p>I suspect that this has something to do with the fact that I used <code>MyFirstClass</code> to initialize a class-level attribute of <code>MySecondClass</code>. But could someone clarify what is going on here?</p>
|
<python><class><attributes>
|
2025-10-10 00:04:29
| 4
| 539
|
Ben
|
79,786,710
| 6,829,655
|
Memory leak in aioboto3 (aiohttp) after upgrading to Python 3.13
|
<p>I’m seeing a steady memory increase in my ECS containers after upgrading Python and dependency versions.
The memory growth stops when I stop incoming gRPC requests, but it never drops back down afterward — suggesting some retained references or unclosed aiohttp sessions.</p>
<p><strong>Setup</strong></p>
<p>I have a global AWS session and a singleton pattern for DynamoDB resource creation:</p>
<pre><code>aws_session = aioboto3.Session()
class DynamoDBClient:
def __init__(self, dynamodb_config):
self.ddb_config = dynamodb_config
self.ddb_resource = None
async def get_resource(self):
if self.ddb_resource is None:
self.ddb_resource = await aws_session.resource(
'dynamodb',
config=self.ddb_config
).__aenter__()
return self.ddb_resource
async def close(self):
if self.ddb_resource:
await self.ddb_resource.__aexit__(None, None, None)
self.ddb_resource = None
</code></pre>
<p>The app always works with the same connection pool object, so a new connection is not opened for every request.
<strong>aexit</strong> is called cleanly when the container shuts down.</p>
<p><strong>Versions</strong></p>
<p>After upgrading from Python 3.11 → 3.13 and bumping dependencies, the leak started appearing:</p>
<pre><code>aioboto3==15.1.0
aiologger==0.7.0
grpcio==1.75.1
protobuf==6.32.1
</code></pre>
<p><strong>Observed behavior</strong></p>
<p>Using tracemalloc, I compared memory snapshots and noticed repeated growth at this location:</p>
<pre><code>/usr/local/lib/python3.13/site-packages/aiohttp/client_proto.py:315:
size=78.7 MiB (+18.1 MiB), count=1,288,026 (+296,583), average=64 B
</code></pre>
<p>It looks like aiohttp (used internally by aioboto3) is accumulating small objects over time, suggesting either:</p>
<p>unclosed aiohttp.ClientSession objects, or</p>
<p>something holding references to aiohttp transport buffers.</p>
<p><strong>What I’ve tried</strong></p>
<ul>
<li>Verified that <strong>aexit</strong> is called on shutdown.</li>
<li>Confirmed that get_resource() reuses the same resource and doesn’t reinitialize it per request.</li>
<li>No other aiohttp usage in the codebase (only through aioboto3).</li>
</ul>
<p><strong>Question</strong></p>
<p>Has anyone experienced memory leaks with aioboto3 or aiohttp after upgrading to Python 3.13?
Could this be related to internal session management in the newer aiohttp version bundled with aioboto3 15.1.0, or something that changed in Python’s async memory handling?</p>
<p>Any tips on how to trace or mitigate aiohttp memory growth through aioboto3 would be appreciated.</p>
|
<python><boto3><aiohttp><python-3.13><aiobotocore>
|
2025-10-09 17:48:51
| 0
| 651
|
datahack
|
79,786,435
| 1,309,443
|
attrs subclass from non-attrs class not respecting initializer arguments
|
<p>I get the following error:</p>
<pre><code>$ python attrs_subclass_test.py
Traceback (most recent call last):
File "attrs_subclass_test.py", line 23, in <module>
attrs_sub = AttrsSub(name='attrs_sub')
TypeError: __init__() got an unexpected keyword argument 'name'
</code></pre>
<p>when running the code below:</p>
<pre><code>import attrs
class NonAttrsBase:
def __init__(self, name=None):
self.name = name
class NonAttrsSub(NonAttrsBase):
pass
@attrs.define
class AttrsSub(NonAttrsBase):
pass
# def __attrs_pre_init__(self, name, *args, **kwargs):
# super().__init__(name=name, *args, **kwargs)
#
# def __attrs_post_init__(self, name):
# super().__init__(name=name)
if __name__ == '__main__':
non_attrs_base = NonAttrsBase(name='non_attrs_base')
non_attrs_sub = NonAttrsSub(name='non_attrs_sub')
attrs_sub = AttrsSub(name='attrs_sub')
</code></pre>
<p>Versions:</p>
<pre><code>Python 3.7.3
attrs 24.2.0
</code></pre>
<p>How do I get <code>attrs</code> to honor <code>__init__()</code> arguments of non-attrs base classes?</p>
<p>Google search returns this AI response, but the examples do not work:</p>
<blockquote>
<p>When an attrs class subclasses a non-attrs class, the attrs library
generates an <code>__init__</code> method for the subclass based on its own
<code>attr.ib()</code> definitions. This generated <code>__init__</code> method does not
automatically call <code>super().__init__()</code> with the appropriate arguments
for the non-attrs base class. This means if the non-attrs base class
has its own <code>__init__</code> method that expects arguments, the attrs
subclass's generated <code>__init__</code> will not pass those arguments, leading
to a TypeError or other unexpected behavior because the base class's
initialization is not performed correctly. To resolve this, you must
explicitly call <code>super().__init__()</code> within a custom <code>__init__</code> or
<code>__attrs_post_init__</code> method in your attrs subclass. Here's how you can do it:</p>
<ol>
<li><p>Custom <code>__init__</code> in the attrs subclass:</p>
<pre class="lang-py prettyprint-override"><code>import attrs
class NonAttrsBase:
def __init__(self, base_arg):
self.base_arg = base_arg
print(f"NonAttrsBase initialized with: {base_arg}")
@attrs.define
class AttrsSubclass(NonAttrsBase):
sub_arg: str = attrs.field()
def __init__(self, base_arg, sub_arg):
super().__init__(base_arg) # Explicitly call base class's __init__
self.sub_arg = sub_arg # Initialize attrs field manually or let attrs handle it if auto_attribs is used
# If using auto_attribs=True in @attrs.define, attrs will handle self.sub_arg = sub_arg automatically.
# However, if you have a custom __init__, you might need to handle it yourself or use __attrs_post_init__
# for attrs-specific initialization.
# Example usage
instance = AttrsSubclass(base_arg="from base", sub_arg="from subclass")
print(f"AttrsSubclass instance: {instance.base_arg}, {instance.sub_arg}")
</code></pre>
</li>
<li><p>Using <code>__attrs_post_init__</code> (recommended for cleaner separation): This approach leverages attrs's generated <code>__init__</code> for its own fields
and uses <code>__attrs_post_init__</code> to handle the base class initialization.</p>
<pre class="lang-py prettyprint-override"><code>import attrs
class NonAttrsBase:
def __init__(self, base_arg):
self.base_arg = base_arg
print(f"NonAttrsBase initialized with: {base_arg}")
@attrs.define
class AttrsSubclass(NonAttrsBase):
sub_arg: str = attrs.field()
base_arg_for_attrs: str = attrs.field(init=False) # Optional: if you want to store base_arg as an attrs field
def __attrs_post_init__(self, base_arg):
super().__init__(base_arg)
# If you defined base_arg_for_attrs, you could set it here:
# self.base_arg_for_attrs = base_arg
# Example usage
instance = AttrsSubclass(sub_arg="from subclass", base_arg="from base")
print(f"AttrsSubclass instance: {instance.base_arg}, {instance.sub_arg}")
</code></pre>
</li>
</ol>
<p>Key takeaway: When subclassing a non-attrs class with an attrs class,
you are responsible for explicitly calling <code>super().__init__()</code> to
ensure the base class is properly initialized. You can do this either
in a custom <code>__init__</code> or, more idiomatically with attrs, within
<code>__attrs_post_init__</code>.</p>
</blockquote>
|
<python><python-attrs>
|
2025-10-09 12:50:05
| 1
| 771
|
BrendanSimon
|
79,786,399
| 6,288,756
|
Does this code indicate a mistake in Wikipedia's example on the "QR Code" page?
|
<p>I am doing some work with QR codes, and to test my code, I was comparing it against the example QR code at the top of <a href="https://en.wikipedia.org/wiki/QR_code" rel="nofollow noreferrer">the Wikipedia page</a> for QR codes:</p>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d0/QR_code_for_mobile_English_Wikipedia.svg/1024px-QR_code_for_mobile_English_Wikipedia.svg.png" width="250">
<p>While working on my code to attempt to re-generate this QR code and ensure I got the same output, I encountered a surprise. In the <a href="https://files.infocentre.io/files/docs_clients/126_2007850898_3334616_ISO_IEC_18004_2015.pdf" rel="nofollow noreferrer">QR standard</a> Section 4.15 says that remainder bits must be appended to the code data, and they must all be zeros. I believe the Wikipedia example is using ones.</p>
<p>My script (code at the bottom of this question) generates the following QR code when attempting to recover Wikipedia's QR code:</p>
<img src="https://i.sstatic.net/Qs6DUVfn.png" width="250">
<p>You will notice that almost all the pixels are identical. The exception is in the lower left. At the far left edge, immediately above the lower-left Finder target, my script produces a shape that looks like a left-facing pitchfork, or the number 3. The Wikipedia one does not. The discrepant pixels are exactly those pixels marked with the "X" in this other Wikipedia image, indicating them as the remainder:</p>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/7/77/QR_Ver3_Codeword_Ordering.svg/1280px-QR_Ver3_Codeword_Ordering.svg.png" width="250">
<p>Now, based on this, the Python QR code library could be wrong, or Wikipedia could be wrong. The QR Code Mask (which XORs the code with one of 8 predefined patterns) makes it hard to tell which code is right. We want to know which code is encoding zeros at those 7 pixels. To work that out, my script (again, code below) runs through all 8 possible Mask patterns, un-masks them, and shows the codes. They should all mostly match (proving that the unmasking was successful), and indeed they do, and they show pure white in that region:</p>
<img src="https://i.sstatic.net/AJn1n3Z8.png" width="250">
<p>And in a QR code, white is a "0" bit. This leads me to conclude that my generator is correctly using zeros for the remainder, and Wikipedia's generator did not.</p>
<p>Of course, these remainder bits are meaningless and should be ignored by any QR decoder. That's why this mistake has persisted for so many years on Wikipedia. But a standard is a standard, and the example QR code at the top of the page ought to be a fully standard-compliant one.</p>
<p>Am I correct in believing that I have demonstrated that Wikipedia has a mistake? Does my logic (and code) check out?</p>
<p>Code here, relies on <a href="https://pypi.org/project/qrcode/" rel="nofollow noreferrer"><code>qrcode</code></a>:</p>
<pre><code>import qrcode
import numpy as np
import matplotlib.pyplot as plt
payload = "http://en.m.wikipedia.org"
version = 3
QR_size = 17 + 4 * version
# Generate a list of the 8 QR codes with the 8 possible masks
all_8_codes = []
for mask_num in range(8):
code_maker = qrcode.QRCode(version = version,
error_correction = 3,
mask_pattern = mask_num)
code_maker.add_data(payload)
code_maker.make()
all_8_codes.append(np.array(code_maker.modules))
plt.title(f"Wikipedia's QR code (Almost)")
plt.imshow(all_8_codes[1], cmap = "Greys")
# Across the 8 codes, identify any pixels which are constant and should not get the mask
anti_mask = np.zeros([QR_size, QR_size])
for i in range(QR_size):
for j in range(QR_size):
px_cast = [arr[i][j] for arr in all_8_codes]
anti_mask[i][j] = all(x == px_cast[0] for x in px_cast)
# Iterate through the 8 codes, re-applying the XOR mask and therefore removing it
for mask_num, code in enumerate(all_8_codes):
# Using the library, get the lambda function that indicates whether a bit flips
mask_lambda = qrcode.util.mask_func(mask_num)
for i in range(QR_size):
for j in range(QR_size):
# Don't apply the mask to areas identified as anti-mask
if not anti_mask[i][j]:
code[i][j] ^= mask_lambda(i,j)
# Show what should be 8 nearly-identical codes with the masks removed
# This proves we successfully demasked the message.
plt.figure()
plt.title(f"Demasked {mask_num}")
plt.imshow(code, cmap = "Greys")
plt.show()
</code></pre>
<p>Edit: as a second point of reference, it appears this post has been linked on the Talk page for that Wikipedia article. Responses there, should they appear, could be insightful for this question. Link:
<a href="https://en.m.wikipedia.org/wiki/Talk:QR_code" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Talk:QR_code</a></p>
|
<python><qr-code>
|
2025-10-09 12:24:57
| 1
| 323
|
TJM
|
79,786,387
| 13,392,257
|
How to add aiokafka producer to FastAPI properly?
|
<p>I am creating FastAPI and aiokafka endpoint. How to add aiokafka producer properly?</p>
<pre><code>from aiokafka import AIOKafkaProducer. The function is woking, but too long. Found out that the problem deals with `producer` function argument (function works fast without producer).
@my_router.post("/{data_type}")
async def create_task(
data_type: str,
request: Request,
db: AsyncSession = Depends(get_db),
producer: AIOKafkaProducer = Depends(get_producer)
):
start_time = time.time()
return {"status": "OK"}
</code></pre>
<p>The producer code:</p>
<pre><code>import json
import ssl
import time
from functools import lru_cache
from fastapi import Depends
from aiokafka import AIOKafkaProducer
from aiokafka.helpers import create_ssl_context
from typing import AsyncGenerator
from app.config import settings
__all__ = ['startup_producer', 'shutdown_producer', 'get_producer']
def producer_serializer(message):
return json.dumps(message).encode('utf-8')
def get_ssl_context():
"""Get SSL context for consumer and producer.
Returns:
AIOKafkaConsumer: SSL context
"""
if settings.kafka_service.security_protocol == "SSL":
return create_ssl_context(...)
else:
return None
async def startup_producer():
start_time = time.time()
producer = AIOKafkaProducer(
# set settings
)
await producer.start()
elapsed_time = time.time() - start_time
print(f"TIME_PROCUCER_1: {elapsed_time:.4f} seconds")
return producer
async def shutdown_producer(producer: AIOKafkaProducer):
await producer.stop()
async def get_producer() -> AsyncGenerator[AIOKafkaProducer, None]:
"""Dependency to get the Kafka producer instance"""
producer = await startup_producer()
try:
yield producer
finally:
await shutdown_producer(producer)
</code></pre>
|
<python><fastapi><aiokafka>
|
2025-10-09 12:12:05
| 1
| 1,708
|
mascai
|
79,786,362
| 10,982,755
|
UV Docker Cache behaviour issues
|
<p>I have checked out <a href="https://docs.astral.sh/uv/guides/integration/docker/#caching" rel="nofollow noreferrer">Docker Caching</a> and the <a href="https://github.com/astral-sh/uv-docker-example/blob/main/Dockerfile" rel="nofollow noreferrer">uv docker example</a>. Both of them fail to clarify on the behaviour of the cache directory.</p>
<p>In the local system, I'm able to check and verify that the <code>UV_CACHE_DIR</code> contains many folders and files like <code>CACHEDIR.TAG</code>, <code>wheels</code>, <code>interpreter</code> etc.</p>
<p>The case is different when building a Docker image, though. Below is one of my <code>Dockerfile</code> for a service:</p>
<pre class="lang-none prettyprint-override"><code>FROM basexg:latest
ENV SERVICE_NAME=application
ENV UV_CACHE_DIR=/${SERVICE_NAME}/cache/uv
COPY /pyproject.toml /build/
COPY /uv.lock /build/
COPY /baselib /build/baselib
COPY /application /build/application
WORKDIR /build
# WITH CACHE
RUN --mount=type=cache,target=${UV_CACHE_DIR} \
uv sync --active --frozen --no-group dev --package application
RUN useradd -u 1001 snapp && chown -R snapp:snapp /build
USER snapp
ENTRYPOINT ["sh", "/build/application/start.sh"]
</code></pre>
<p>This is the <code>pyproject.toml</code> for the service file</p>
<pre class="lang-toml prettyprint-override"><code>[project]
name = "application"
version = "1.0.0"
description = "Contains all business logics related to application service"
authors = [{ email = "john@test.com" }]
requires-python = "==3.12.*"
dependencies = [
"baselib==0.1.0",
"jsonschema==4.22.0",
]
[tool.uv]
package = false
[tool.uv.sources]
baselib = { workspace = true }
[dependency-groups]
dev = [
"ruff==0.13.0",
"bandit>=1.8.6",
"pytest>=8.4.2",
"pytest-cov>=7.0.0",
]
[tool.pytest.ini_options]
pythonpath = ["."]
addopts = "--cov=. --cov-report=xml"
testpaths = ["tests"]
</code></pre>
<p>Running <code>uv cache dir</code> inside the container returns the value <code>/application/cache/uv</code>. However, as mentioned in the doc, the cache directory is not created inside the container as well as in the host system.</p>
<ol>
<li><p>Is this the right behaviour? I'm guessing the cache mount should exist somewhere. I have also been able to verify this because the packages keep downloading from the source and not used from the cache. This is the docker build command <code>docker build -f application/Dockerfile -t apps . --progress=plain</code>. I've added <code>tokenizers</code> as a dependency to test the cache and it starts to download all the package and I assumed the cache would install only the new package.</p>
<p>Snippet that all the dependent packages are being downloaded.</p>
<pre class="lang-none prettyprint-override"><code>#11 11.11 Downloading google-api-python-client
#11 11.53 Downloading uv
#11 11.85 Downloading pymupdf
#11 12.14 Downloading botocore
#11 15.23 Built uwsgi==2.0.28
#11 15.24 Prepared 157 packages in 14.27s
#11 15.57 Installed 157 packages in 330ms
</code></pre>
</li>
<li><p>My existing setup consists of using Jenkins as the CI/CD server running on k8s. All the jenkins build agents are therefore ephemeral and I won't be able to use the host file system for the uv cache. Only if above problem is resolved, I'll be able to check and see if I can mount a k8s pvc as a <code>UV_CACHE_DIR</code></p>
</li>
</ol>
<p>Platform: macOS 15.4 x86_64</p>
<p>Version: uv 0.8.17</p>
|
<python><docker><jenkins><uv>
|
2025-10-09 11:48:34
| 0
| 617
|
Vaibhav
|
79,786,206
| 988,279
|
confluent-kafka receive messages from last x days
|
<p>I've a topic in kafka (only 1 partition) and want do receive the messages within the last <em>x</em> days. How does this work?</p>
<p>Working consumer code for fetching all messages:</p>
<pre><code>from confluent_kafka import Consumer
import json
config = {
'bootstrap.servers': '0.0.0.0:9092',
'group.id': 'my-python-consumer',
'auto.offset.reset': 'earliest'
}
consumer = Consumer(config)
consumer.subscribe(['my_topic'])
while True:
msg = consumer.poll(1.0)
if msg is not None:
json_data_1 = json.loads(msg.value().decode('utf-8'))
print(json_data_1['payload']['after'])
</code></pre>
|
<python><apache-kafka>
|
2025-10-09 09:03:02
| 0
| 522
|
saromba
|
79,786,115
| 9,338,724
|
How to ignore certain exceptions in VSCode Debugger, when "Uncaught Exceptions" breakpoint turned on
|
<p>While working with Pyhthon debugger on VSCode, I would like to ignore "HTTPException" from triggering a breakpoint in debug mode, while "Uncaught Exceptions" is turned on.</p>
|
<python><visual-studio-code>
|
2025-10-09 07:07:06
| 0
| 427
|
abyesilyurt
|
79,785,999
| 3,294,994
|
Type hint a decorator to enforce matching signature as the decorated function
|
<p>How can I implement <code>DecoratorFactory</code> such that it type-checks as follows:</p>
<pre class="lang-py prettyprint-override"><code>def accepts_foo(foo: int): ...
def accepts_bar(bar: int): ...
decorator_foo = DecoratorFactory(foo=1)
decorator_foo(accepts_foo) # okay because they both accept foo
decorator_foo(accepts_bar) # type error because the params are different.
</code></pre>
<p>I suppose <code>DecortorFactor</code>'s <code>__init__</code> should only accept kwargs.</p>
<hr />
<p>The closest I got was:</p>
<pre><code>from __future__ import annotations
from typing import Callable, Generic, ParamSpec, TypeVar
from typing_extensions import reveal_type
T = TypeVar("T", covariant=True)
P = ParamSpec("P")
_NOT_SET = object()
class _Decorator(Generic[P, T]):
def __init__(self, f: Callable[P, T]) -> None:
self.f = f
self._result: T = _NOT_SET # type: ignore
def __call__(self, *args: P.args, **kwargs: P.kwargs) -> T:
result = self.f(*args, **kwargs)
self._result = result
return result
@property
def result(self) -> T:
assert self._result is not _NOT_SET
return self._result
class _DecoratorFactory(Generic[P]):
def __init__(self, *a: P.args, **k: P.kwargs) -> None:
self.a = a
self.k = k
def __call__(self, func: Callable[P, T]) -> _Decorator[P, T]:
return _Decorator(func)
class DecoratorFactoryFactory(Generic[P]):
def __init__(self) -> None:
pass
def __call__(self, *a: P.args, **k: P.kwargs) -> _DecoratorFactory[P]:
return _DecoratorFactory(*a, **k)
def accepts_foo(foo: int) -> float: ...
def accepts_bar(bar: int) -> str: ...
DecoratorFactory = DecoratorFactoryFactory()
decorator_foo = DecoratorFactory(foo=1)
accepts_foo = decorator_foo(accepts_foo)
reveal_type(accepts_foo(42))
reveal_type(accepts_foo.result)
accepts_bar = decorator_foo(accepts_bar)
reveal_type(accepts_bar(42))
reveal_type(accepts_bar.result)
</code></pre>
<p>(I had to do the "factory factory", otherwise I was getting a bunch of error like <code>error: Arguments for ParamSpec "P@Kallable" are missing (reportCallIssue)</code>)</p>
<p>pyright:</p>
<pre><code>$ pyright so.py
/so.py
/so.py:54:13 - information: Type of "accepts_foo(42)" is "float"
/so.py:55:13 - information: Type of "accepts_foo.result" is "float"
/so.py:58:13 - information: Type of "accepts_bar(42)" is "str"
/so.py:59:13 - information: Type of "accepts_bar.result" is "str"
0 errors, 0 warnings, 4 informations
</code></pre>
<p>I'd like <code>decorator_foo(accepts_bar)</code> to type fail, because the signatures don't match.</p>
|
<python><python-typing><pyright>
|
2025-10-09 02:56:54
| 1
| 846
|
obk
|
79,785,937
| 1,447,953
|
Why doesn't Pandas concat do a copy when one of the dataframes is empty?
|
<p>Consider this example:</p>
<pre><code>import pandas as pd
df_part1 = pd.DataFrame()
df_part2 = pd.DataFrame({'A': [1,1], 'B': [3,4]})
df_concat_out = pd.concat([df_part1, df_part2])
print("id(df_part2.values) == id(df_concat_out.values):", id(df_part2.values) == id(df_concat_out.values))
df_part2.B *= -1
df_concat_out_2 = pd.concat([df_concat_out, df_part2])
print(df_concat_out_2)
</code></pre>
<p>The output is</p>
<pre><code>id(df_part2.values) == id(df_concat_out.values): True
A B
0 1 -3
1 1 -4
0 1 -3
1 1 -4
</code></pre>
<p>Is this the expected behaviour? It is not expected to me at least. The default <code>copy</code> parameter value for <code>concat</code> is supposed to be <code>True</code>, but clearly it is not doing a copy in this case. Why? I guess it is trying to be clever and a copy doesn't necessarily seem necessary here, but I think this example shows that it definitely IS necessary.</p>
<p>This is on pandas 2.1.4. Haven't checked the very latest version.</p>
|
<python><pandas><dataframe>
|
2025-10-08 23:57:00
| 2
| 2,974
|
Ben Farmer
|
79,785,799
| 9,890,009
|
How to search by multiple fields on django_opensearch_dsl
|
<p>I have an opensearch server in which I want to search items and apply some filters to the search:</p>
<pre><code>search = Item.search().query("match", name="test")
</code></pre>
<p>I need to search items by multiple filters, like name, date, location, etc. For this I will need some other kind of queries like "range" or "terms".</p>
<p>Now the issue is I've trying using <code>opensearch-dsl</code> package like this:</p>
<pre><code> search_1 = Q("match", name="test")
search_2 = Q("terms", name="location")
search_3 = Q("range", name="date")
filters = [search_1, search_2, search_3]
query = Q("bool", should=filters)
search = Item.search().query(query)
</code></pre>
<p>This is not working, constantly returning errors like:</p>
<pre><code>{"error":"unhashable type: 'Bool'"}
</code></pre>
<p>Event if I try to run the query individually like this:</p>
<pre><code> query = Q("match", name="test")
search = Item.search().query(query)
</code></pre>
<p>How can I do a search by multiple fields?</p>
|
<python><django><opensearch>
|
2025-10-08 19:19:34
| 1
| 792
|
Paul Miranda
|
79,785,678
| 1,574,952
|
Shape of sliced array from shape of array and slice
|
<p>If I know the shape of a numpy array like <code>(1000, 50)</code>, and I have an arbitrary selection expressed as an <code>IndexExpression</code>, let's say <code>np.s_[:200, :]</code>, how can I evaluate the shape of the sliced array (in this example <code>(200, 50)</code>), without actually constructing the array, applying the slice and checking the resulting shape?</p>
<p>Worth noting that <code>IndexExpression</code>s are often slices, but not necessarily, for example:</p>
<pre><code>>>> np.s_[None, 1:2:3, ..., 0]
(None, slice(1, 2, 3), Ellipsis, 0)
</code></pre>
<p>Finally, a bit of context for this: I am actually dealing with an h5py <code>Dataset</code> where I can query the shape before actually allocating an array or reading anything from file. I'm going to read a subset of the data (selected with an <code>IndexExpression</code>) and want to know in advance what the shape is going to be.</p>
|
<python><numpy><numpy-slicing>
|
2025-10-08 16:19:51
| 1
| 364
|
Kyle
|
79,785,634
| 14,649,310
|
How to get headers from a request in Streamlit
|
<p>Hello we have a Streamlit app deployed in Azure and we want to do the azure authentication. Part of it is that we receive the users info after authorization in these headers:</p>
<pre><code>X-MS-CLIENT-PRINCIPAL-NAME: User's name/email
X-MS-CLIENT-PRINCIPAL-ID: User's unique ID
</code></pre>
<p>as described <a href="https://learn.microsoft.com/en-us/azure/container-apps/authentication" rel="nofollow noreferrer">here in Microsoft official docs</a></p>
<p>But I cant seem to be able to find anywhere a way to get the headers inside teh Streamlit app when I search for it I find various hacks like for example in <a href="https://stackoverflow.com/questions/78348427/extract-bearertoken-from-http-headers-in-streamlit">this previous question</a> they pass the headers before the streamlit app as query parameters but thats not possible in our case we dont have a seperate UI (and also weird way to do it if I may).</p>
<p>Any idea how can this be done? CahtGPT also seems to not know.</p>
|
<python><azure><streamlit><azure-authentication>
|
2025-10-08 15:38:33
| 1
| 4,999
|
KZiovas
|
79,785,601
| 577,288
|
Cannot start a ThreadPoolExecutor inside a threading.Thread function
|
<p>In this example, Tkinter GUI starts <code>ThreadPoolExecutor</code>. But <code>ThreadPoolExecutor</code> is inside a <code>threading.Thread</code> function. The thread function says it's finished
before <code>ThreadPoolExecutor</code> has started ... and returns this error.</p>
<pre><code>RuntimeError: can't register atexit after shutdown
</code></pre>
<p>Here is the code:</p>
<pre><code>import time
import threading
import tkinter as tk
import concurrent.futures
import random
def thread1(_details, lock):
print('started ' + _details[2])
time.sleep(random.randint(1, 10))
print('finished ' + _details[2])
return _details
def workpool():
file_lock = threading.Lock()
list1 = [['0', 'pending', 'name: test1'], ['1', 'pending', 'name: test2'], ['2', 'pending', 'name: test3'], ['4', 'pending', 'name: test4'], ['7', 'pending', 'name: test5'], ['8', 'pending', 'name: test6']]
print('thread running')
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
working_threads = {executor.submit(thread1, _details, file_lock): _details for index1, _details in enumerate(list1)}
for future in concurrent.futures.as_completed(working_threads):
current_result = future.result()
print('threads done')
def launch_threads():
workpool()
def new_button():
threading.Thread(name='Launch_ThreadPoolExecutor', target=launch_threads).start()
def main_thread():
root = tk.Tk()
root.geometry("+{}+{}".format(650, 50))
root.geometry("{}x{}".format(200, 200))
new_bt = tk.Button(root, text="New", command=new_button, height=2, padx=10, pady=5, width=15, wraplength=100)
new_bt.place(x=40, y=40, height=30, width=80)
root.mainloop()
threading.Thread(name='GUI_thread', target=main_thread).start()
</code></pre>
<p>Here is the full traceback:</p>
<pre><code>Exception in thread Launch_ThreadPoolExecutor:
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 1041, in _bootstrap_inner
self.run()
~~~~~~~~^^
File "C:\Users\User\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 992, in run
self._target(*self._args, **self._kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\GOOD\Coding\.Coding_Projects\test3.py", line 24, in workpool
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python313\Lib\concurrent\futures\__init__.py", line 50, in __getattr__
from .thread import ThreadPoolExecutor as te
File "C:\Users\User\AppData\Local\Programs\Python\Python313\Lib\concurrent\futures\thread.py", line 37, in <module>
threading._register_atexit(_python_exit)
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 1503, in _register_atexit
raise RuntimeError("can't register atexit after shutdown")
RuntimeError: can't register atexit after shutdown
</code></pre>
|
<python><multithreading><threadpool><python-multithreading>
|
2025-10-08 15:11:51
| 1
| 5,408
|
Rhys
|
79,785,579
| 4,047,679
|
How to identify system/external dependencies of installed python packages
|
<p>We're trying to offload checking if our python environment can install on our server to a GitHub action. What I would like to know is if there's a way of knowing which python packages in the environment require external/system dependencies (for example, gdal) so we could specify them in the image to use for the action.</p>
|
<python><pip><github-actions><dependency-management>
|
2025-10-08 14:51:02
| 1
| 2,869
|
raphael
|
79,785,530
| 8,297,745
|
Closing file after SFTP upload still makes client retry the same upload in `close` sftp_response from paramiko
|
<p>I run a custom SFTP Server using Paramiko that receives video files from NVR cameras and uploads it to a cloud storage. The flow is simple: The client (NVR) connects and authenticates, we write the incoming file locally, then push it to Google Cloud Storage, and remove the local copy. After that the connection should close and the NVR should stop trying to resend the same file. What I'm seeing is that instead some NVRs keep retrying the exact same upload in a loop right after we delete the local file, which floods the server with repeated writes.</p>
<p>Here is a reduced version of the handler that closes the connection right after upload:</p>
<pre class="lang-py prettyprint-override"><code>import os, logging, paramiko
class SFTPFileHandle(paramiko.SFTPHandle):
def __init__(self, flags, upload_file, file_name, file_path, rename_file) -> None:
super().__init__(flags)
self.upload_file = upload_file
self.file_name = file_name
self.file_path = file_path
self.rename_file = rename_file
def chattr(self, path, attr):
pass
def stat(self):
return paramiko.SFTPAttributes.from_stat(os.fstat(self.readfile.fileno()))
def close(self):
try:
super().close()
try:
exists = os.path.exists(self.file_path)
size = os.path.getsize(self.file_path) if exists else -1
logging.info(f"[Close] path='{self.file_path}' exists={exists} size={size}")
except Exception as e:
logging.warning(f"[Close] stat failed for '{self.file_path}': {e}")
fname = self.rename_file(file_name=self.file_name, file_path=self.file_path)
if fname:
if fname == "camera_off":
try:
os.remove(self.file_path)
logging.info("[Cleanup] removed local file (camera_off)")
except Exception as e:
logging.warning(f"[Cleanup] remove failed: {e}")
else:
ok = self.upload_file(fname, self.file_path)
if ok:
try:
os.remove(self.file_path)
logging.info("[Cleanup] removed local file after upload")
except Exception as e:
logging.warning(f"[Cleanup] remove failed after upload: {e}")
else:
logging.error("[Close] upload failed")
else:
logging.error("[Close] failed to build final name")
except Exception:
logging.exception("[Close] error while finalizing")
</code></pre>
<p>I expected that once <code>close()</code> finished without raising any errors, the client would see the transfer as successful and stop retrying. Instead, the NVR keeps attempting the upload immediately and we observe duplicate uploads of the same file. Server logs do not show a protocol error and the upload to GCS happens successfully, followed by the removal of the local file.</p>
<p>This suggests my client didn't receive a clear success signal on close...</p>
<p>PS: If anyone wonders, like I wondered myself, unfortunately, no, it's not possible to access logs on the NVR to see what is happening after the upload is complete. The NVRs are potato cameras with little to no management at all, with embedded software that only allows to change the DNS of the SFTP Server and no other option available.</p>
|
<python><ssh><sftp><paramiko>
|
2025-10-08 13:52:32
| 1
| 849
|
Raul Chiarella
|
79,785,458
| 10,975,692
|
Why does multiprocess with "fork" fail under Python 3.14 but work in 3.13 (works only with "spawn" and "forkserver")?
|
<p>The following code works fine on <strong>Python 3.13</strong>, but fails on <strong>Python 3.14</strong> with a <code>RuntimeError</code> related to asyncio tasks.</p>
<p>If I switch the multiprocessing start method from <code>"fork"</code> to <code>"spawn"</code>, the code works again — but <code>"spawn"</code> is too slow for some use cases.</p>
<p>Is there another way to make this work under Python 3.14 without changing the start method?</p>
<hr />
<h3>Minimal reproducible example</h3>
<pre class="lang-py prettyprint-override"><code>import asyncio
import inspect
from functools import wraps
from typing import Any, Awaitable, Callable, Union
import pytest
from multiprocess import Pipe, Process
from multiprocess.connection import Connection
import multiprocess as mp
mp.set_start_method(method="fork", force=True) # "spawn" works fine
class SubprocessError:
def __init__(self, ex: Exception) -> None:
self.exception = ex
def in_subprocess[T](func: Callable[..., Union[T, Awaitable[T]]]) -> Callable[..., Awaitable[T]]:
@wraps(func)
async def wrapper(*args: Any, **kwargs: Any) -> T:
return await calculate_in_subprocess(func, *args, **kwargs)
return wrapper
async def calculate_in_subprocess[T](func: Callable[..., Union[T, Awaitable[T]]], *args: Any, **kwargs: Any) -> T:
rx, tx = Pipe(duplex=False) # receiver & transmitter ; Pipe is one-way only
process = Process(target=_inner, args=(tx, func, *args), kwargs=kwargs)
process.start()
event = asyncio.Event()
loop = asyncio.get_event_loop()
loop.add_reader(fd=rx.fileno(), callback=event.set)
if not rx.poll(): # do not use process.is_alive() as condition here
await event.wait()
loop.remove_reader(fd=rx.fileno())
event.clear()
result = rx.recv()
process.join() # this blocks synchronously! make sure that process is terminated before you call join()
rx.close()
tx.close()
if isinstance(result, SubprocessError):
raise result.exception
return result
def _inner[T](tx: Connection, fun: Callable[..., Union[T, Awaitable[T]]], *a, **kw_args) -> None:
event_loop = None
if inspect.iscoroutinefunction(fun):
event_loop = asyncio.new_event_loop()
asyncio.set_event_loop(event_loop)
try:
if event_loop is not None:
res = event_loop.run_until_complete(fun(*a, **kw_args))
else:
res = fun(*a, **kw_args)
except Exception as ex:
tx.send(SubprocessError(ex=ex))
else:
tx.send(res)
@pytest.mark.asyncio
async def test_in_subprocess_simple_async():
@in_subprocess
async def f() -> int:
return 42
assert await f() == 42
</code></pre>
<hr />
<h3>Error message with Python 3.14</h3>
<pre><code>-------------------------------- live log call ---------------------------------
ERROR asyncio:base_events.py:1875 Exception in callback <_asyncio.TaskStepMethWrapper object at 0x7e71ba729ff0>()
handle: <Handle <_asyncio.TaskStepMethWrapper object at 0x7e71ba729ff0>()>
Traceback (most recent call last):
File "/usr/lib/python3.14/asyncio/events.py", line 94, in _run
self._context.run(self._callback, *self._args)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Cannot enter into task <Task pending name='Task-2' coro=<test_in_subprocess_simple_async.<locals>.f() running at foo.py:73> cb=[_run_until_complete_cb() at /usr/lib/python3.14/asyncio/base_events.py:181]> while another task <Task pending name='Task-1' coro=<test_in_subprocess_simple_async() running at foo.py:77> cb=[_run_until_complete_cb() at /usr/lib/python3.14/asyncio/base_events.py:181]> is being executed.
</code></pre>
<hr />
<h3>Environment</h3>
<p>Installed packages (note: <code>multiprocess</code> must be installed from GitHub):</p>
<pre><code>certifi==2025.10.5
charset-normalizer==3.4.3
dill==0.4.0
docker==7.1.0
idna==3.10
iniconfig==2.1.0
multiprocess @ git+https://github.com/uqfoundation/multiprocess.git@02ea4bd36cac5013d70847815c92e1a736ef4a05
packaging==25.0
pluggy==1.6.0
Pygments==2.19.2
pytest==8.4.2
pytest-asyncio==1.2.0
pytest_docker_tools==3.1.9
requests==2.32.5
urllib3==2.5.0
</code></pre>
<hr />
<h3>Question</h3>
<p>Why does this <code>RuntimeError</code> occur under Python 3.14 with <code>fork</code>, and is there a way to fix it <strong>without switching to <code>spawn</code> or <code>forkserver</code></strong>?</p>
<h3>Edit 1: Trying both libs with all subprocess modes</h3>
<p>On Python 3.14, I get the following results when I ran my code:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Library</th>
<th>Fork</th>
<th>Spawn</th>
<th>Forkserver</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>multiprocess</code></td>
<td>✗ (RuntimeError: Cannot enter into task)</td>
<td>✓</td>
<td>✓</td>
</tr>
<tr>
<td><code>multiprocessing</code></td>
<td>✗ (RuntimeError: Cannot enter into task)</td>
<td>✗ (PicklingError, see below)</td>
<td>✗ (PicklingError, see below)</td>
</tr>
</tbody>
</table></div>
<pre><code> def dump(obj, file, protocol=None):
'''Replacement for pickle.dump() using ForkingPickler.'''
> ForkingPickler(file, protocol).dump(obj)
E _pickle.PicklingError: Can't pickle local object <function test_in_subprocess_simple_async.<locals>.f at 0x7b3f34cfcb40>
E when serializing tuple item 1
E when serializing dict item '_args'
E when serializing multiprocessing.context.Process state
E when serializing multiprocessing.context.Process object
</code></pre>
<p>So <code>mutliprocessing</code> does <strong>not</strong> work, because <code>pickle</code> is not as good as <code>dill</code>.
And <code>spawn</code> and <code>forkserver</code> work fine. So my question is: Is there a way to make also <code>fork</code> work?</p>
|
<python><python-3.x><python-asyncio><multiprocess><python-3.14>
|
2025-10-08 12:36:47
| 2
| 1,500
|
DarkMath
|
79,785,397
| 13,806,869
|
Why does Pandas not recognise my sqlalchemy connection engine?
|
<p>I'm trying to connect to an IBM DB2 database from Python. I'm using Python 3.12.10, SQLAlchemy 1.4.54, and Pandas 2.3.2. This is what my code looks like:</p>
<pre><code>import os
import sqlalchemy
import pandas as pd
from keyring import get_credential
if os.name == "nt":
os.add_dll_directory(os.path.join(os.getenv("IBM_DB_HOME"), "bin"))
hostname = #my hostname
port = #my port number
database = #my database
engine = sqlalchemy.create_engine(
f"db2+ibm_db://"
f"{get_credential('my saved credentials', None).username}"
f":{get_credential('my saved credentials', None).password}"
f"@{hostname}:{port}/{database}")
df = pd.read_sql_query(
sql = f"""
--my SQL query
;""",
con = engine)
</code></pre>
<p>However, this returns the following error message:</p>
<blockquote>
<p>UserWarning: pandas only supports SQLAlchemy connectable
(engine/connection) or database string URI or sqlite3 DBAPI2
connection. Other DBAPI2 objects are not tested. Please consider using
SQLAlchemy.</p>
</blockquote>
<p>Does anyone know why this is happening and what I can do to fix it please? I am using a SQLAlchemy engine for the 'con' parameter in pd.read_sql_query, as the error message advises me to do, so I'm not sure what the problem is. I've confirmed that the hostname, port, database name, and my credentials are all correct.</p>
|
<python><pandas><sqlalchemy><db2>
|
2025-10-08 11:19:04
| 1
| 521
|
SRJCoding
|
79,785,354
| 2,276,054
|
Injecting build date automatically when building WHL with setuptools?
|
<p>I have a simple Python project that is often updated, so I need to track its version number and display it in the runtime. I store version and build date in <code>__init__.py</code> in project's root:</p>
<pre class="lang-py prettyprint-override"><code>__version__ = "1.0.6"
__date__ = "2025-10-08 18:33"
</code></pre>
<p>This works well, however, when I'm just about to build the wheel file, I need to update these 2 values manually. For <code>__version__</code> I don't mind, but can <code>__date__</code> be somehow set automatically?</p>
<p>I build my project simply with <code>python -m build --wheel</code>. Below are the relevant sections of my <code>pyproject.toml</code>. I don't have <code>setup.py</code> file at all.</p>
<pre class="lang-toml prettyprint-override"><code>[project]
name = "MyProject"
dynamic = ["version"]
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
[tool.setuptools.packages.find]
where = ["src"]
[tool.setuptools.package-data]
"*" = ["*.html", "*.ini", "*.json"]
[tool.setuptools.dynamic]
version = { attr = "myproject.__version__" }
</code></pre>
<p>Or maybe there's another way, like checking timestamps of <code>dist-info</code> meta files inside the WHL package?</p>
|
<python><python-3.x><setuptools><pyproject.toml>
|
2025-10-08 10:33:52
| 2
| 681
|
Leszek Pachura
|
79,785,313
| 2,891,692
|
Logging without year:month:day but with hours:minutes:seconds,milliseconds
|
<p>I try to get logging setup
<code>hours:minutes:seconds,milliseconds</code></p>
<p>Example: <code>11:28:51,284</code></p>
<p>Actually I get
<code>2025-10-08 11:44:07,658</code></p>
<p>This is the source with which I try to do it:</p>
<pre><code># --- Logging Setup ---
# --- Logging Setup ---
# Manual configuration for maximum robustness.
# 1. Get the root logger.
logger = logging.getLogger()
logger.setLevel(logging.INFO)
# 2. Clear any pre-existing handlers to prevent duplicates.
if logger.hasHandlers():
logger.handlers.clear()
# 3. Create a shared formatter.
# log_formatter = logging.Formatter('%(asctime)s - %(levelname)-8s - %(message)s')
from datetime import datetime
def formatTime(self, record, datefmt=None):
return datetime.fromtimestamp(record.created).astimezone().isoformat(timespec='milliseconds')
log_formatter = logging.Formatter.formatTime = formatTime
# 4. Create, configure, and add the File Handler.
file_handler = logging.FileHandler(f'{SCRIPT_DIR}/log/dictation_service.log', mode='w')
file_handler.setFormatter(log_formatter)
logger.addHandler(file_handler)
# 5. Create, configure, and add the Console Handler.
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(log_formatter)
logger.addHandler(console_handler)
# The filter is innocent, but we leave it out for now for the cleanest possible test.
logger.handlers[0].addFilter(WindowsEmojiFilter())
</code></pre>
<p>I tried to learn from: <a href="https://stackoverflow.com/questions/6290739/python-logging-use-milliseconds-in-time-format">Python logging: use milliseconds in time format</a></p>
<p>I get this error:</p>
<pre><code>AttributeError: 'function' object has no attribute 'format'
Call stack:
File "/home/seeh/projects/py/STT/scripts/../dictation_service.py", line 325, in <module>
validate_setup(SCRIPT_DIR, logger)
File "/home/seeh/projects/py/STT/scripts/py/func/checks/setup_validator.py", line 18, in validate_setup
logger.info("INFO: Running setup validation...")
Message: 'INFO: Running setup validation...'
Arguments: ()
--- Logging error ---
Traceback (most recent call last):
File "/usr/lib/python3.13/logging/__init__.py", line 1151, in emit
msg = self.format(record)
File "/usr/lib/python3.13/logging/__init__.py", line 999, in format
return fmt.format(record)
</code></pre>
|
<python><python-logging>
|
2025-10-08 09:57:43
| 1
| 2,630
|
SL5net
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.