Dataset columns: QuestionId (int64, 74.8M to 79.8M) · UserId (int64, 56 to 29.4M) · QuestionTitle (string, 15 to 150 chars) · QuestionBody (string, 40 to 40.3k chars) · Tags (string, 8 to 101 chars) · CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) · AnswerCount (int64, 0 to 44) · UserExpertiseLevel (int64, 301 to 888k) · UserDisplayName (string, 3 to 30 chars)
79,749,806
10,416,012
How to typehint functools.partial classes
I'm struggling to correctly type hint a Python dataclass (or any class) with partial initialization; the type seems to get lost somewhere, but I'm not sure where:

```py
from dataclasses import dataclass
from functools import partial

@dataclass
class Test:
    a: int
    b: int

    @classmethod
    def test_partial[T: Test](cls: type[T], b: int) -> partial[T]:
        return partial(cls, b=b)

    @classmethod
    def test_partial2(cls, b: int):
        return partial(cls, b=b)

part = Test.test_partial(3)
part(  # linter suggests nothing when passing here

part2 = partial(Test, b=3)
part2(  # linter suggests a=..., b=... when passing

# interestingly enough, leaving out the type hint, the linter knows again
part = Test.test_partial2(3)
part(  # linter suggests a=..., b=... when passing
```

The interesting thing is that pyright gets it right when I don't give it the annotation, so probably there is a way. Any clue?

PS: regarding the similar question, notice that I'm not asking how to type hint in a mypy-compliant manner (no error) but in a manner that tells pyright (or mypy) which arguments are left.

So solutions such as `-> Any:` or `-> partial[...]` may work in the other case but not in this one, and protocols are not OK as this needs to be dynamic.
<python><python-typing><partial>
2025-08-29 03:54:44
1
2,235
Ziur Olpa
79,749,770
6,514,559
Pandas read_csv: Skip rows contains invalid data that can cause data_type parsing errors
The csv file can contain string values in certain integer columns, and I want to ignore those rows or handle them via a callback when that happens. I tried using `on_bad_lines='skip'/'warn'`, however it only gets triggered where there are parsing issues due to a delimiter/number-of-columns mismatch. Here is the code I am using.

csv:

```
id,name,age
1,,"25"
2,"",30
jj,John,
```

```
import pandas as pd

schema = {'id': 'int64', 'name': 'str', 'age': 'int'}
df = pd.read_csv('a.csv', dtype=schema, on_bad_lines='warn', engine='python')
```
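Since `on_bad_lines` only fires on structural problems, one workaround (a minimal sketch, sidestepping `read_csv`'s dtype enforcement) is to load the suspect columns as strings and coerce them afterwards, collecting the failing rows for a callback:

```python
import pandas as pd

df = pd.read_csv("a.csv", dtype=str)  # read everything as strings first

for col in ("id", "age"):
    df[col] = pd.to_numeric(df[col], errors="coerce")  # unparseable values become NaN

bad = df[df[["id", "age"]].isna().any(axis=1)]  # rows to warn about / hand to a callback
df = df.dropna(subset=["id", "age"]).astype({"id": "int64", "age": "int64"})
```

Note that a row with a genuinely missing `age` is dropped here too; distinguishing "missing" from "unparseable" would require keeping the pre-coercion strings around.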
<python><pandas>
2025-08-29 02:27:39
2
774
Despicable me
79,749,646
13,014,864
Create unique index for each group PySpark
I am working with a relatively large dataframe (close to 1 billion rows) in PySpark. This dataframe is in "long" format, and I would like to have a unique index for each group defined by a groupBy over multiple columns. An example dataframe:

```
+--------------+-------+---------+------+------+
|classification|   id_1|     id_2|     t|     y|
+--------------+-------+---------+------+------+
|             1| person|    Alice|   0.1| 0.247|
|             1| person|    Alice|   0.2| 0.249|
|             1| person|    Alice|   0.3| 0.255|
|             0| animal|   Jaguar|   0.1| 0.298|
|             0| animal|   Jaguar|   0.2| 0.305|
|             0| animal|   Jaguar|   0.3| 0.310|
|             1| person|    Chris|   0.1| 0.267|
+--------------+-------+---------+------+------+
```

Here I would like to perform an operation such that I can index each group of `["classification", "id_1", "id_2"]`. Example output is:

```
+--------------+-------+---------+------+------+----+
|classification|   id_1|     id_2|     t|     y| idx|
+--------------+-------+---------+------+------+----+
|             1| person|    Alice|   0.1| 0.247|   1|
|             1| person|    Alice|   0.2| 0.249|   1|
|             1| person|    Alice|   0.3| 0.255|   1|
|             0| animal|   Jaguar|   0.1| 0.298|   2|
|             0| animal|   Jaguar|   0.2| 0.305|   2|
|             0| animal|   Jaguar|   0.3| 0.310|   2|
|             1| person|    Chris|   0.1| 0.267|   3|
+--------------+-------+---------+------+------+----+
```

I cannot use `monotonically_increasing_id()` since I don't want a unique ID per row. What I've done, hopefully as a stop-gap, is to create another dataframe, create unique indices for each group, then join that dataframe back into the original.

```
from pyspark.sql import functions as F

df_groups = (
    df
    .select("classification", "id_1", "id_2")
    .dropDuplicates()
    .withColumn(
        "idx",
        F.monotonically_increasing_id()
    )
)
df = df.join(other=df_groups, on=["classification", "id_1", "id_2"])
```

This can be a pretty hefty operation, so I'm wondering if there is any native Spark operation that effectively does the same thing.
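A hedged sketch of one native alternative: `dense_rank` over a window ordered by the grouping columns assigns the same index to every row of a group, without the dropDuplicates-plus-join round trip. The caveat is that a window with `orderBy` but no `partitionBy` collapses everything into a single partition, which can hurt at this scale:

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# every distinct (classification, id_1, id_2) combination gets one index
w = Window.orderBy("classification", "id_1", "id_2")
df = df.withColumn("idx", F.dense_rank().over(w))
```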
<python><apache-spark><pyspark>
2025-08-28 21:58:15
2
931
CopyOfA
79,749,636
1,275,942
Redirect stdout at a file-descriptor level to silence noisy module import
I have a module that I need to use. However, when it is imported, it helpfully prints out some status information.

```py
> import problematic_module
Connected to local cache!
Detected version 1.0.0!
<several more lines of startup info>
>
```

The offending text is coming from stdout:

```bash
$ python -c "import problematic_module" 1> stdout.txt 2> stderr.txt
$ cat stdout.txt
Connected to local cache!
Detected version 1.0.0!
<several more lines of startup info>
$ cat stderr.txt
$
```

I want to suppress this, so a CLI tool does not have unnecessary cruft surrounding the output. There are [many questions](https://stackoverflow.com/questions/2125702/how-to-suppress-console-output-in-python) about how to suppress console output, and a popular solution is something like this:

```
import contextlib
import os
import sys

def noisy_code():
    print("hello from print!")
    sys.stdout.write("hello from stdout!\n")
    sys.__stdout__.write("hello from __stdout__!\n")
    with os.fdopen(1, 'w') as fd_1:
        fd_1.write("hello from fdopen(1)!\n")

@contextlib.contextmanager
def silence_stdout():
    try:
        old_stdout = sys.__stdout__
        with open(os.devnull, "w") as devnull:
            with contextlib.redirect_stdout(devnull):
                sys.__stdout__ = devnull  # PLEASE BE QUIET.
                yield
    finally:
        sys.__stdout__ = old_stdout

with silence_stdout():
    noisy_code()
```

`noisy_code()` here is a testbench that replicates various ways to write to stdout. `silence_stdout` redirects both `sys.stdout` and `sys.__stdout__` to null. This approach silences much of the output from my noisy module import. But a few lines still manage to print.

If you run this with `noisy_code`, the `fdopen()` line is unaffected by the redirection:

```
hello from fdopen(1)!
```

My assumption is that the module import runs some C initialization code, and that code `printf()`'s something, which would totally disregard Python's `sys.stdout`.

### Partial solution

Since shell redirection silences the output, the obvious step seems to be to redirect fd 1 itself. At least on POSIX, I'd want to do something like:

- `new_fd = dup(1)`
- `dup2(DEVNULL, 1)`
- Perform the module import. The proc's fd 1, STDOUT, is fully redirected to DEVNULL.
- `dup2(new_fd, 1)` to restore stdout once done.

Here's my attempt at that:

```
@contextlib.contextmanager
def redirect_stdout_fd():
    try:
        stdout_fd = sys.stdout.fileno()
        dup_stdout = os.dup(stdout_fd)
        with open(os.devnull, "w") as devnull:
            os.dup2(devnull.fileno(), stdout_fd)
            yield
    finally:
        os.dup2(dup_stdout, stdout_fd)
        os.close(dup_stdout)
```

Unfortunately, I'm developing this for Windows, where this crashes, printing:

```
Traceback (most recent call last):
...
    print("hello from print!")
OSError: [WinError 1] Incorrect function
```

My guess is that this style of dup2 swapping is a unix-only trick and I'm missing the Windows equivalent.

### Additional test cases

Windows API WriteFile:

```
if os.name == "nt":
    import msvcrt
    import ctypes
    from ctypes import wintypes

    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

    SetStdHandle = kernel32.SetStdHandle
    SetStdHandle.argtypes = (wintypes.DWORD, wintypes.HANDLE)
    SetStdHandle.restype = wintypes.BOOL

    GetStdHandle = kernel32.GetStdHandle
    GetStdHandle.argtypes = (wintypes.DWORD,)
    GetStdHandle.restype = wintypes.HANDLE
    STD_OUTPUT_HANDLE = -11

    WriteFile = kernel32.WriteFile
    WriteFile.argtypes = [
        wintypes.HANDLE,
        wintypes.LPCVOID,
        wintypes.DWORD,
        ctypes.POINTER(wintypes.DWORD),
        wintypes.LPVOID
    ]
    WriteFile.restype = wintypes.BOOL

    def win_stdout_write(text: str):
        h_stdout = GetStdHandle(STD_OUTPUT_HANDLE)
        written = wintypes.DWORD(0)
        data = text.encode("utf-8")
        if not WriteFile(h_stdout, data, len(data), ctypes.byref(written), None):
            raise ctypes.WinError(ctypes.get_last_error())
            # OSError: [WinError 6] The handle is invalid.
```

Saved sys.stdout (i.e. logging):

```
module_stdout = sys.stdout

def write_to_module_stdout(string):
    # Causes OSError: [Errno 9] Bad file descriptor
    module_stdout.write(string)
```

### Relevant links

(Providing these as potential starting points if anybody is looking to chase down a better solution.)

- https://bugs.python.org/issue30555
- https://github.com/python/cpython/issues/68688
<python><windows><stdout>
2025-08-28 21:46:27
3
899
Kaia
79,749,580
1,253,006
How to retrieve one data value from the result of a pandas DataFrame.groupby().mean()
Using Pandas 2.3.2 on Python 3.9.2 via JupyterLab.

I've collected a bunch of thermal data from a thing. I've already collated that data into `DataFrame` chunks that look like this:

```
      zone       data  Setpoint
9    zone1   40.34347        40
13   zone1   40.07553        40
17   zone1   39.98359        40
21   zone1   40.06895        40
25   zone1   40.04465        40
..     ...        ...       ...
952  zone4  109.91890       110
956  zone4  109.90520       110
960  zone4  110.00600       110
964  zone4  110.02160       110
968  zone4  109.94940       110
```

Then I've used `groupby` and `mean()` to, well, group and create means:

`means = temps[['zone','Setpoint','data']].groupby(['zone','Setpoint']).mean()`

```
                      data
zone  Setpoint
zone1 40         40.050959
      50         50.030125
      60         60.066517
      70         70.050257
      80         80.045247
      90         90.071484
      100       100.032826
      110       110.137990
zone3 40         39.990645
      50         50.015407
      60         60.053120
      70         70.044470
      80         80.043304
      90         90.077433
      100       100.070493
      110       110.140510
zone4 40         40.048473
      50         50.017906
      60         60.037044
      70         70.012458
      80         80.034280
      90         90.087850
      100       100.047793
      110       110.067390
```

(Aside: I notice how "data" is a line above "zone" and "Setpoint", and I don't know what that's trying to tell me about the structure)

I can grab one zone's worth of info by doing `z1means = means.loc['zone1']`:

```
                data
Setpoint
40         40.050959
50         50.030125
60         60.066517
70         70.050257
80         80.045247
90         90.071484
100       100.032826
110       110.137990
```

What I *can't* figure out is how to get at, e.g., the "data" value where "Setpoint" is 80:

`*some undiscovered syntax*`

`80.045247`

I've come to understand that the "Setpoint" value is the *index* on the `DataFrame` object held in `z1means` but can't figure out how to refer to the data using it:

```
<class 'pandas.core.frame.DataFrame'>
Index: 8 entries, 40 to 110
Data columns (total 1 columns):
 #   Column  Non-Null Count  Dtype
---  ------  --------------  -----
 0   data    8 non-null      float64
dtypes: float64(1)
memory usage: 128.0 bytes
None
```

I can only imagine that the `Index: 8 entries, 40 to 110` is telling me something important, but I can't figure out what that is or how to use it.

Things I've tried and their results:

```
z1means[80]                            -> KeyError: 80
z1means.at(80)                         -> TypeError: '_AtIndexer' object is not callable
z1means.loc(80)                        -> KeyError: 80, ValueError: No axis named 80 for object type DataFrame
z1means.iloc(80)                       -> Same as .loc
z1means.loc(z1means['Setpoint'] == 80) -> KeyError: Setpoint
```

What am I just not grasping here?
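For reference, since `z1means` is a one-column DataFrame whose *index* is `Setpoint`, label-based lookups go through `.loc[...]` with square brackets; the attempts above fail because `.loc(80)` calls the indexer object instead of indexing it. A short sketch of equivalent lookups:

```python
z1means.loc[80, "data"]            # scalar: 80.045247
z1means["data"][80]                # same value, via the Series
z1means.at[80, "data"]             # .at is also indexed with [], not ()
means.loc[("zone1", 80), "data"]   # straight from the MultiIndex frame
```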
<python><pandas>
2025-08-28 20:44:44
2
1,577
Brian A. Henning
79,749,522
6,514,559
Pandas read_csv, load empty/missing column values as NaN while loading empty string for quoted empty strings values in csv file
My csv file contains the empty string `""` as well as missing column values `,,`. When I load it with `read_csv()`, both are loaded as either an empty string or NaN, depending on the `keep_default_na` and `na_values` configuration. I want to distinguish between these two different values, such that a missing column value is loaded as NaN and an empty quoted string as an empty string.

```
id,name,age
1,,"25"
2,"",3
3,John,
```
<python><pandas><string>
2025-08-28 19:34:02
3
774
Despicable me
79,749,411
4,518,341
What is NaT in Pandas?
I have a dataframe with some "NaT" values in a datetime column. What does that mean?

```
   project   status  completed
0  windows     done  2025-08-20
1    doors     done  2025-08-21
2     hvac  delayed         NaT
```

I checked the docs page [pandas.NaT](https://pandas.pydata.org/docs/reference/api/pandas.NaT.html), but it just says "alias of NaT", which doesn't seem to make sense.

I found plenty of SO questions, but I didn't see one that actually explains what NaT is. E.g. [How do I check if a specific item in pandas series is NaT](https://stackoverflow.com/q/63859046/4518341)

*This is meant to be a [canonical question](https://meta.stackoverflow.com/q/291992/4518341) with a self-answer. I'm not genuinely asking for myself, just want to put this out there since the docs page is lacking, and someone asked this question in earnest [here](https://stackoverflow.com/q/79744663/4518341): "What are NaT? All of the time values should be datetime objects."*
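A quick illustration of the answer the question invites: `NaT` ("Not a Time") is the missing-value marker pandas uses for datetime-like dtypes, the counterpart of `NaN` for floats, and like `NaN` it is not equal to itself, so it is detected with `isna()`:

```python
import pandas as pd

s = pd.Series(["2025-08-20", None], dtype="datetime64[ns]")
print(s)                 # the None shows up as NaT
print(pd.isna(s))        # [False, True]
print(pd.NaT == pd.NaT)  # False -- equality checks won't find it
```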
<python><pandas><datetime>
2025-08-28 17:08:43
1
33,775
wjandrea
79,749,374
2,398,143
'Failed to communicate with agent at http://127.0.0.1:5001 Tried multiple endpoint variations.', type=<ContentType.ERROR: 'error'>)
I have written a sample script using the python-a2a package and am hitting the following error. I am not sure if the issue is with the specific version of the package.

```
A2ACalcClient initialized for URL: http://127.0.0.1:5001/a2a
Sending calculation request: add(a=5, b=3)
Unexpected response type from A2A agent: error
Full response content: ErrorContent(message='Failed to communicate with agent at http://127.0.0.1:5001/a2a/a2a. Tried multiple endpoint variations.', type=<ContentType.ERROR: 'error'>)

A2ACalcClient initialized for URL: http://127.0.0.1:5001
Sending calculation request: add(a=5, b=3)
Unexpected response type from A2A agent: error
Full response content: ErrorContent(message='Failed to communicate with agent at http://127.0.0.1:5001/tasks/send. Tried multiple endpoint variations.', type=<ContentType.ERROR: 'error'>)
```

Server code:

```
import math
from python_a2a import (
    A2AServer, Message, TextContent, FunctionCallContent,
    FunctionResponseContent, FunctionParameter, MessageRole, run_server, Task
)

class CalculatorAgent(A2AServer):

    def handle_message(self, message):
        if message.content.type == "text":
            return Message(
                content=TextContent(
                    text="I'm a [Python A2A](python-a2a.html) calculator agent. You can call my functions:\n"
                         "- calculate: Basic arithmetic (operation, a, b)\n"
                         "- sqrt: Square root (value)"
                ),
                role=MessageRole.AGENT,
                parent_message_id=message.message_id,
                conversation_id=message.conversation_id
            )
        elif message.content.type == "function_call":
            function_name = message.content.name
            params = {p.name: p.value for p in message.content.parameters}

            try:
                if function_name == "calculate":
                    operation = params.get("operation", "add")
                    a = float(params.get("a", 0))
                    b = float(params.get("b", 0))

                    if operation == "add":
                        result = a + b
                    elif operation == "subtract":
                        result = a - b
                    elif operation == "multiply":
                        result = a * b
                    elif operation == "divide":
                        if b == 0:
                            raise ValueError("Cannot divide by zero")
                        result = a / b
                    else:
                        raise ValueError(f"Unknown operation: {operation}")

                    return Message(
                        content=FunctionResponseContent(
                            name="calculate",
                            response={"result": result}
                        ),
                        role=MessageRole.AGENT,
                        parent_message_id=message.message_id,
                        conversation_id=message.conversation_id
                    )

                elif function_name == "sqrt":
                    value = float(params.get("value", 0))
                    if value < 0:
                        raise ValueError("Cannot calculate square root of negative number")
                    result = math.sqrt(value)

                    return Message(
                        content=FunctionResponseContent(
                            name="sqrt",
                            response={"result": result}
                        ),
                        role=MessageRole.AGENT,
                        parent_message_id=message.message_id,
                        conversation_id=message.conversation_id
                    )
            except Exception as e:
                return Message(
                    content=FunctionResponseContent(
                        name=function_name,
                        response={"error": str(e)}
                    ),
                    role=MessageRole.AGENT,
                    parent_message_id=message.message_id,
                    conversation_id=message.conversation_id
                )

if __name__ == "__main__":
    agent = CalculatorAgent()
    run_server(agent, host="0.0.0.0", port=5001)
```

Client code:

```
import python_a2a
from python_a2a import (
    A2AClient, Message, FunctionCallContent, FunctionParameter, MessageRole
)

class A2ACalcClient:
    """
    A class to encapsulate the communication with a Python A2A agent.
    """

    def __init__(self, a2a_url: str):
        """
        Initializes the A2ACalcClient with the A2A agent's URL.

        Args:
            a2a_url: The URL of the Python A2A agent (e.g., "http://127.0.0.1:5001").
        """
        self.client = A2AClient(a2a_url)
        print(f"A2ACalcClient initialized for URL: {a2a_url}")

    def send_calculation_request(self, operation: str, a: int, b: int) -> int | None:
        """
        Sends a function call message to the A2A agent to perform a calculation.

        Args:
            operation: The mathematical operation (e.g., "add", "subtract", "multiply", "divide").
            a: The first operand.
            b: The second operand.

        Returns:
            The result of the calculation if successful, otherwise None.
        """
        print(f"Sending calculation request: {operation}(a={a}, b={b})")
        function_call_content = FunctionCallContent(
            name="calculate",
            parameters=[
                FunctionParameter(name="operation", value=operation),
                FunctionParameter(name="a", value=a),
                FunctionParameter(name="b", value=b)
            ]
        )
        function_call_message = Message(
            content=function_call_content,
            role=MessageRole.USER
        )

        try:
            response = self.client.send_message(function_call_message)

            if response.content.type == "function_response":
                result = response.content.response.get("result")
                if result is not None:
                    print(f"Received result from A2A agent: {result}")
                    return result
                else:
                    print("Function response received, but 'result' key is missing.")
                    return None
            else:
                print(f"Unexpected response type from A2A agent: {response.content.type}")
                print(f"Full response content: {response.content}")
                return None
        except Exception as e:
            print(f"An error occurred while communicating with the A2A agent: {e}")
            return None

# --- Example Usage ---
if __name__ == "__main__":
    # Instantiate the calcAgentClient
    a2a_url = "http://127.0.0.1:5001"
    # a2a_url = "http://127.0.0.1:5001/a2a"  # Also tried with this URL
    calcAgentClient = A2ACalcClient(a2a_url)

    # Perform an addition
    sum_result = calcAgentClient.send_calculation_request(operation="add", a=5, b=3)
    if sum_result is not None:
        print(f"Addition Result: {sum_result}")  # Expected: 8
    print("-" * 30)
```
<python><python-3.x><agent><python-a2a>
2025-08-28 16:30:03
1
2,183
AnilJ
79,749,292
1,719,931
Export Excel timetable spreadsheet to ics calendar
I have made a timetable in Excel, with one row for each event, and columns for the start and end date and time of those events.

For instance:

| Start | End | Description |
| --- | --- | --- |
| 2025-06-07 12:00 | 2025-06-07 14:00 | Hotel checkin |
| 2025-06-07 15:00 | 2025-06-07 17:00 | Visit the museum |
| 2025-06-07 18:00 | 2025-06-07 20:00 | Dinner |
| 2025-06-08 09:00 | 2025-06-08 12:00 | Go to the sea |

How can I export that timetable to ICS, so that I can import it in calendar software like Thunderbird?

My doubt is what I should write in the ICS file so that I can import it in a calendar program.
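A minimal sketch of one way to do this (file and column names taken from the question; times written as floating local time): each spreadsheet row becomes a `VEVENT`, while `VCALENDAR`, `UID`, and `DTSTAMP` are boilerplate that importers such as Thunderbird expect. ICS lines are CRLF-terminated:

```python
import pandas as pd

df = pd.read_excel("timetable.xlsx", parse_dates=["Start", "End"])

def fmt(ts: pd.Timestamp) -> str:
    return ts.strftime("%Y%m%dT%H%M%S")  # no trailing Z => local time

lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//timetable-export//EN"]
for i, row in df.iterrows():
    lines += [
        "BEGIN:VEVENT",
        f"UID:timetable-{i}@example.local",    # any globally unique id works
        f"DTSTAMP:{fmt(pd.Timestamp.now())}",  # required by the spec
        f"DTSTART:{fmt(row['Start'])}",
        f"DTEND:{fmt(row['End'])}",
        f"SUMMARY:{row['Description']}",
        "END:VEVENT",
    ]
lines.append("END:VCALENDAR")

with open("timetable.ics", "w", newline="") as f:
    f.write("\r\n".join(lines) + "\r\n")
```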
<python><excel><icalendar>
2025-08-28 15:15:19
1
5,202
robertspierre
79,749,291
1,175,788
What would be a suitable way to receive run-time arguments and combine them into an iterable object?
I'm struggling to get runtime arguments and combine them into a list. I'd like the run-time arguments to look like `python main.py --all` or `python main.py --endpoint1 --endpoint2`. I only have 2 endpoints to worry about right now, possibly more in the future, so I'd like the solution to consider that possibility.

`main.py` looks like this at the moment:

```
import os
import argparse

import api_vendor

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Pick args")
    parser.add_argument("--all", help="Get endpoint", action="store_true")
    parser.add_argument("--endpoint1", help="Get endpoint1", action="store_true")
    parser.add_argument("--endpoint2", help="Get endpoint2", action="store_true")
    args = parser.parse_args()

    get_all = os.getenv("GET_ALL") == "YES" or args.all
    get_endpoint1 = os.getenv("GET_ENDPOINT1") == "YES" or get_all or args.endpoint1
    get_endpoint2 = os.getenv("GET_ENDPOINT2") == "YES" or get_all or args.endpoint2

    av = api_vendor.API()
    if get_all or get_endpoint1:
        av.get_endpoint(endpoint1=True)
    if get_all or get_endpoint2:
        av.get_endpoint(endpoint2=True)
```

I don't like this since it's kind of messy and starting to look like spaghetti code. Whichever arguments are set, I'd like to loop through and call `get_endpoint`, something like:

```
for endpoint in endpoints:
    av.get_endpoint(endpoint)
```
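One table-driven sketch (names illustrative): declare the known endpoints once, generate a flag per endpoint, and collect whichever were requested into a list, so adding a third endpoint is a one-line change:

```python
import argparse
import os

KNOWN_ENDPOINTS = ["endpoint1", "endpoint2"]  # extend as endpoints are added

parser = argparse.ArgumentParser(description="Pick endpoints")
parser.add_argument("--all", action="store_true", help="Fetch every endpoint")
for name in KNOWN_ENDPOINTS:
    parser.add_argument(f"--{name}", action="store_true", help=f"Fetch {name}")
args = parser.parse_args()

get_all = args.all or os.getenv("GET_ALL") == "YES"
endpoints = [
    name for name in KNOWN_ENDPOINTS
    if get_all or getattr(args, name) or os.getenv(f"GET_{name.upper()}") == "YES"
]

for endpoint in endpoints:
    print(f"would call av.get_endpoint({endpoint!r})")  # av being the question's API object
```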
<python><command-line-arguments>
2025-08-28 15:13:45
1
3,011
simplycoding
79,749,250
56
How to use variable tables with SQLAlchemy Core
With SQLAlchemy Core you can run "raw" SQL queries, which can include parameters. However, I can't find an "official" way to use variable identifiers. Take the function `get_entity` below. It returns a row from a table and key specified by the function arguments.

For the moment, I created the `_escape_identifier` function to do this. However, this only works in Postgres (sqlite3 and MySQL probably have their own escaping rules for identifiers). Also: I'm not 100% sure this works in every case.

```
import re, sqlalchemy

def _escape_identifier(identifier_name, always_double_quote=False):
    # WARNING: Works in Postgres only, and is not intended for production code.
    if not always_double_quote and re.match(r'^[a-z_][a-z0-9$_]*$', identifier_name, re.IGNORECASE):
        return identifier_name
    return '"' + identifier_name.replace('"', '""') + '"'

def get_entity(entity_name, key_column, key_value, entity_schema='public'):
    ee = _escape_identifier
    query = sqlalchemy.sql.text(f'''
        SELECT *
        FROM {ee(entity_schema)}.{ee(entity_name)}
        WHERE {ee(key_column)} = :key;
    ''')
    parameters = dict(key=key_value)
    # run query with parameters and return data
    # ...
```

The `_escape_identifier` function is basically doing the same thing as Postgres' `QUOTE_IDENT` function. However, that function returns `text`, so it adds another round trip (you have to run a query on Postgres, and use the result instead of `_escape_identifier`). Also, it's Postgres specific.

The question: is there a way with SQLAlchemy to escape an identifier literal, so it can be used in a "raw" SQL query?
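A sketch of what looks like the closest thing SQLAlchemy offers: every dialect carries an `IdentifierPreparer` whose `quote()` method applies that database's quoting rules (double quotes for Postgres, backticks for MySQL, and so on), so the escaping follows whatever engine you are connected to. It lives in the dialect layer rather than the documented Core API, so treat it as semi-public (the connection URL below is a placeholder):

```python
import sqlalchemy

engine = sqlalchemy.create_engine("postgresql://user:pass@localhost/db")
preparer = engine.dialect.identifier_preparer

def get_entity_query(entity_name, key_column, entity_schema="public"):
    q = preparer.quote  # quotes (and escapes) only when the identifier needs it
    return sqlalchemy.text(
        f"SELECT * FROM {q(entity_schema)}.{q(entity_name)} "
        f"WHERE {q(key_column)} = :key"
    )
```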
<python><python-3.x><sqlalchemy>
2025-08-28 14:40:11
1
19,370
doekman
79,749,231
856,804
How to type a wrapper function without Any
Is there a way to type the following function without using `Any`?

```
from typing import Any

import tenacity

@tenacity.retry
def foo_with_retry(*args: Any, **kwargs: Any) -> None:
    foo(*args, **kwargs)
```

Is [ParamSpec](https://docs.python.org/3/library/typing.html#typing.ParamSpec) applicable here?
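ParamSpec does fit here; a sketch of the general shape (independent of whether tenacity's own stubs preserve it) looks like this, with the wrapper inheriting `foo`'s exact parameter list so the type checker rejects wrong arguments without any `Any`:

```python
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def foo(x: int, y: str = "a") -> None: ...

def retrying(func: Callable[P, R]) -> Callable[P, R]:
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        return func(*args, **kwargs)  # retry loop elided for brevity
    return wrapper

foo_with_retry = retrying(foo)
foo_with_retry(1, y="b")   # OK
foo_with_retry("nope")     # flagged by the type checker
```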
<python><python-typing>
2025-08-28 14:27:26
0
9,110
zyxue
79,749,129
11,318,472
Use `curve_fit` with `partial` using named parameters instead of positional parameters
I'm trying to use `curve_fit` on a function for which I want to freeze one or more parameters using `partial`.

E.g. this pattern, which is working:

```py
from scipy.optimize import curve_fit
from functools import partial

# Irrelevant
xdata = [0]*5
ydata = xdata

def func(x, a, b):
    return x + a + b

pfunc = partial(func, 2)

popt, _ = curve_fit(pfunc, xdata, ydata)
print(popt)
# > [-2.]
```

What I want to do is provide `partial` with the parameter(s) as named parameters, not positional. This however seems not to work reliably.

```py
from scipy.optimize import curve_fit
from functools import partial

# Irrelevant
xdata = [0]*5
ydata = xdata

def func(x, a, b):
    return x + a + b

pfunc = partial(func, b=2)
popt, _ = curve_fit(pfunc, xdata, ydata)
print(popt)
# > [-2.]

pfunc = partial(func, a=2)
popt, _ = curve_fit(pfunc, xdata, ydata)
print(popt)
# > raises ValueError: Unable to determine number of fit parameters.
```

Am I doing something wrong? Is there a way to get this to work? In my application, `func` may take n >= 3 parameters, of which I want to fix up to (n-1).
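`curve_fit` guesses the number of fit parameters by introspecting the callable's positional arguments, and a `partial` that freezes `a` by keyword apparently leaves a signature that introspection cannot resolve. A sketch of a workaround that sidesteps `partial` entirely: build a small wrapper whose signature lists only the free parameters:

```python
from scipy.optimize import curve_fit

xdata = [0] * 5
ydata = xdata

def func(x, a, b):
    return x + a + b

def freeze_a(a):
    # the wrapper's signature (x, b) is what curve_fit introspects
    def wrapped(x, b):
        return func(x, a=a, b=b)
    return wrapped

popt, _ = curve_fit(freeze_a(2), xdata, ydata)
print(popt)  # [-2.]
```

For the general n-parameter case, the same idea can be automated by generating the wrapper from `inspect.signature(func)` minus the frozen names.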
<python><scipy-optimize>
2025-08-28 13:03:29
1
1,319
euronion
79,749,001
1,022,138
Removing frame/outside of the text from red LED dotted display
I am building a MAUI app and need to preprocess photos of a red dotted LED display so OCR can read the digits. The photos often include a bezel/frame and screw holes around the LEDs. I must remove that noise (top/right/left/bottom depending on the photo) while preserving every LED dot. Many preprocessing attempts either leave the frame/screws or destroy the dot grid.

I can only use ImageSharp / .NET features (EmguCV / OpenCvSharp are not an option for iOS in my project). Below is the current ImageSharp class I'm using; it roughly follows: grayscale → blur → edge detection → large dilation → threshold → invert. That pipeline could not remove the noise properly.

```
public class ImageSharpOcrProcessor
{
    public static byte[] ProcessImageForOcr( string inputImagePath, byte binaryThreshold = 64 )
    {
        // Load image
        using var image = Image.Load<Rgba32>( inputImagePath );
        return ProcessImageInternal( image, binaryThreshold );
    }

    public static async Task<byte[]> ProcessImageForOcrAsync( string inputImagePath, byte binaryThreshold = 64 )
    {
        return await Task.Run( () => ProcessImageForOcr( inputImagePath, binaryThreshold ) );
    }

    private static byte[] ProcessImageInternal( Image<Rgba32> image, byte binaryThreshold )
    {
        // Convert to grayscale
        image.Mutate( x => x.Grayscale() );

        // Gaussian blur (5x5 kernel, sigma = 0 means auto-calculate)
        image.Mutate( x => x.GaussianBlur( 2.0f ) ); // radius ~2 approximates 5x5 kernel

        // Canny edge detection (ImageSharp doesn't have built-in Canny, so we'll use edge detection)
        //image.Mutate( x => x.DetectEdges( KnownEdgeDetectorKernels.Sobel ) ); //alternative
        ApplyCannyLikeEdgeDetection( image, 20f, 80f );

        // Dilate (ImageSharp doesn't have built-in morphological operations, so we'll implement)
        for ( int i = 0; i < 4; i++ ) // Apply dilation repeatedly for stronger effect
            ApplyDilation( image, 4 );

        // Apply binary threshold before inversion (best practice for OCR)
        ApplyBinaryThreshold( image, binaryThreshold ); // Convert gray pixels to pure black/white

        // Bitwise NOT (invert)
        image.Mutate( x => x.Invert() );

        // Encode to PNG bytes
        using var memoryStream = new MemoryStream();
        image.Save( memoryStream, new PngEncoder() );
        return memoryStream.ToArray();
    }

    // Alternative enhanced edge detection method (closer to Canny) - OPTIMIZED
    private static void ApplyCannyLikeEdgeDetection( Image<Rgba32> image, float lowThreshold = 50f, float highThreshold = 150f )
    {
        // Apply Sobel edge detection directly (skip extra Gaussian blur since we already did it)
        image.Mutate( x => x.DetectEdges( KnownEdgeDetectorKernels.Sobel ) );

        // Apply simplified threshold for better OCR results - OPTIMIZED
        image.ProcessPixelRows( accessor =>
        {
            for( int y = 0 ; y < accessor.Height ; y++ )
            {
                var pixelRow = accessor.GetRowSpan( y );
                for( int x = 0 ; x < pixelRow.Length ; x++ )
                {
                    var pixel = pixelRow[ x ];
                    // Use only R channel since image is already grayscale (faster than averaging RGB)
                    float intensity = pixel.R;
                    // Simplified threshold: strong edges become white, everything else black
                    // We'll let the binary threshold step handle final cleanup
                    byte newValue = intensity > lowThreshold ? (byte)255 : (byte)0;
                    pixelRow[ x ] = new Rgba32( newValue, newValue, newValue, 255 );
                }
            }
        } );
    }

    // Custom dilation implementation for SixLabors.ImageSharp - OPTIMIZED
    private static void ApplyDilation( Image<Rgba32> image, int kernelSize )
    {
        int width = image.Width;
        int height = image.Height;
        var originalPixels = new byte[ width * height ]; // Use 1D array for better performance

        // First pass: copy original pixels (only R channel since grayscale)
        image.ProcessPixelRows( accessor =>
        {
            for( int y = 0 ; y < height ; y++ )
            {
                var rowSpan = accessor.GetRowSpan( y );
                for( int x = 0 ; x < width ; x++ )
                {
                    originalPixels[ y * width + x ] = rowSpan[ x ].R;
                }
            }
        } );

        int halfKernel = kernelSize / 2;

        // Apply dilation with optimized kernel
        image.ProcessPixelRows( accessor =>
        {
            for( int y = 0 ; y < height ; y++ )
            {
                var pixelRow = accessor.GetRowSpan( y );
                for( int x = 0 ; x < width ; x++ )
                {
                    byte maxValue = 0;

                    // Optimized kernel bounds
                    int startY = Math.Max( 0, y - halfKernel );
                    int endY = Math.Min( height - 1, y + halfKernel );
                    int startX = Math.Max( 0, x - halfKernel );
                    int endX = Math.Min( width - 1, x + halfKernel );

                    // Check kernel area with optimized indexing
                    for( int ny = startY ; ny <= endY ; ny++ )
                    {
                        for( int nx = startX ; nx <= endX ; nx++ )
                        {
                            byte pixelValue = originalPixels[ ny * width + nx ];
                            if( pixelValue > maxValue )
                                maxValue = pixelValue;
                        }
                    }

                    pixelRow[ x ] = new Rgba32( maxValue, maxValue, maxValue, 255 );
                }
            }
        } );
    }

    // Binary threshold implementation - converts grayscale to pure black/white for better OCR
    private static void ApplyBinaryThreshold( Image<Rgba32> image, byte threshold = 128 )
    {
        image.ProcessPixelRows( accessor =>
        {
            for( int y = 0 ; y < accessor.Height ; y++ )
            {
                var pixelRow = accessor.GetRowSpan( y );
                for( int x = 0 ; x < pixelRow.Length ; x++ )
                {
                    var pixel = pixelRow[ x ];
                    // Use R channel since image is grayscale
                    byte intensity = pixel.R;
                    // Apply binary threshold: anything above threshold becomes white (255), below becomes black (0)
                    byte newValue = intensity > threshold ? (byte)255 : (byte)0;
                    pixelRow[ x ] = new Rgba32( newValue, newValue, newValue, 255 );
                }
            }
        } );
    }
}
```

Sample input / output

Input image (raw): [original image](https://i.sstatic.net/VoRFVQth.jpg)

Current output (what my pipeline produces): [processed image](https://i.sstatic.net/VCfWmTTt.jpg)

I can't rely on a fixed crop (I previously cropped 25% from the top) because images vary in distance/angle/lighting.

I have tried several ImageSharp-based approaches (blur → Sobel/edge → dilation → threshold, etc.) but they either leave the frame or destroy the LED dot grid.

I want a robust method (or a short Python/OpenCV proof-of-concept I can port) that automatically removes the bezel/frame/screws from any side (top/right/left/bottom) while preserving all LED dots.

I have also tried deskewing to straighten the text, but with no success either.
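Here is a rough Python/OpenCV proof-of-concept of the kind asked for, to be ported to ImageSharp: instead of edge detection, isolate the red LEDs by color, merge the dots into one blob, and crop to that blob's bounding box so the bezel and screws fall away on every side (the filename and HSV thresholds are placeholders to tune):

```python
import cv2

img = cv2.imread("display.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# red wraps around hue 0 in HSV, so combine two hue ranges
mask = (cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))
        | cv2.inRange(hsv, (170, 80, 80), (180, 255, 255)))

# close the gaps between neighbouring dots so the digits form one blob
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (25, 25))
blob = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

# crop to the largest connected component: the LED area
contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
digits = mask[y:y + h, x:x + w]  # dots preserved, frame/screws cropped out

cv2.imwrite("digits.png", digits)
```

The same steps map to ImageSharp as a per-pixel red threshold, the dilation already written above, and a bounding-box scan over the lit pixels.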
<python><c#><opencv><image-processing><imagesharp>
2025-08-28 11:23:59
0
1,638
boss
79,748,854
1,659,599
Not able to get selection of PDF document
With `PySide6`, `getSelection` returns an invalid `QPdfSelection`, no matter what arguments we pass to it.

We'd expect a valid `QPdfSelection` with a non-zero `boundingRectangle`.

**How to reproduce it**

We call `getAllText` on a given `QPdfDocument` and we get a valid `QPdfSelection` and its `boundingRectangle` as expected:

```py
selection: QPdfSelection = document.getAllText(1)
print(selection.isValid(), selection.boundingRectangle())
# True PySide6.QtCore.QRectF(72.000000, 72.000000, 451.000000, 728.000000)
```

But when we call `getSelection` we get an invalid `QPdfSelection` with a zeroed `boundingRectangle`, no matter what arguments we pass to it (see below).

In our example here we use the `top_left` and `bottom_right` `QPointF`s of the `boundingRectangle` of the `QPdfSelection` returned by `getAllText` above:

```py
top_left: QPointF = selection.boundingRectangle().topLeft()
bottom_right: QPointF = selection.boundingRectangle().bottomRight()
selection: QPdfSelection = document.getSelection(current_page, top_left, bottom_right)
print(selection.isValid(), selection.boundingRectangle())
# False PySide6.QtCore.QRectF(0.000000, 0.000000, 0.000000, 0.000000)
```

**Minimal Reproducible Example**

```
pip install pyside6==6.9.2
```

```py
from PySide6.QtCore import QPointF
from PySide6.QtPdf import QPdfDocument, QPdfSelection

document: QPdfDocument = QPdfDocument(None)
document.load("filename.pdf")
current_page: int = 1

print("# getAllText()")
selection: QPdfSelection = document.getAllText(current_page)
print(selection.isValid(), selection.boundingRectangle())
# returns as expected:
# True PySide6.QtCore.QRectF(72.000000, 72.000000, 451.000000, 728.000000)

top_left: QPointF = selection.boundingRectangle().topLeft()
bottom_right: QPointF = selection.boundingRectangle().bottomRight()
print(top_left)      # PySide6.QtCore.QPointF(72.000000, 72.000000)
print(bottom_right)  # PySide6.QtCore.QPointF(523.000000, 800.000000)

print("# getSelection()")
selection: QPdfSelection = document.getSelection(current_page, top_left, bottom_right)
print(selection.isValid(), selection.boundingRectangle())
# doesn't return as expected:
# False PySide6.QtCore.QRectF(0.000000, 0.000000, 0.000000, 0.000000)
```

**Tested on both Linux and Windows**

- Linux 5.10.0-19-amd64 #1 SMP Debian 5.10.149-1 (2022-10-17) with Python 3.12.11, Qt: v 6.9.2, PyQt: v 6.9.2
- Linux Debian 12 6.1.0-22-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.94-1 (2024-06-21) with Python 3.11.2, Qt: v 6.9.2, PyQt: v 6.9.2
- Windows 10 10.0.19045 with Python 3.12.6, Qt: v 6.9.2, PyQt: v 6.9.2

Both with the following Python packages:

```ini
pyside6==6.9.2
pyside6-addons==6.9.2
pyside6-essentials==6.9.2
shiboken6==6.9.2
```
<python><pyside6>
2025-08-28 08:58:37
0
7,359
wolfrevo
79,748,530
1,316,365
Setting global variables for python multiprocessing
I have a large array and an object I'd like to call multiple times with multiprocessing. Neither the data nor the object internals get modified.

This works:

```
import numpy as np
from multiprocessing import Pool

data = np.arange(400).reshape(20, 20)

class MyClass():
    def __call__(self, indx):
        return np.sum(data[:, indx])

my_class = MyClass()

def call_single_indx(indx):
    result = my_class(indx)
    return result

def launch_jobs(nmap=10, num_jobs=3):
    with Pool(processes=num_jobs) as pool:
        result = pool.map(call_single_indx, range(nmap))
    result = np.array(result)
    return result

if __name__ == "__main__":
    result = launch_jobs()
    print(result)
```

But this fails:

```
import numpy as np
from multiprocessing import Pool

data = None
my_class = None

def set_globals(n1, n2):
    global data
    data = np.arange(n1).reshape(n2, n2)
    global my_class
    my_class = MyClass()

class MyClass():
    def __call__(self, indx):
        return np.sum(data[:, indx])

def call_single_indx(indx):
    result = my_class(indx)
    return result

def launch_jobs(nmap=10, num_jobs=3):
    with Pool(processes=num_jobs) as pool:
        result = pool.map(call_single_indx, range(nmap))
    result = np.array(result)
    return result

if __name__ == "__main__":
    set_globals(n1=400, n2=20)
    launch_jobs()
```

It fails with `TypeError: 'NoneType' object is not callable`, so `set_globals` failed to actually change the value of the global variable and it is still None.

How can I get my `set_globals` function to actually set the global variables so that all the processes share them?

I'm adding some more text here because the post is mostly code and it won't let me post without more details.
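A sketch of the standard fix: have each worker run the setup itself via the `Pool`'s `initializer` hook. Under the `spawn` start method (Windows, recent macOS) workers re-import the module, so the parent's late assignment to `data`/`my_class` is never seen and the module-level `None` wins; an initializer runs inside every worker after that import:

```python
import numpy as np
from multiprocessing import Pool

data = None

def init_worker(n1, n2):
    global data
    data = np.arange(n1).reshape(n2, n2)  # executed once per worker process

def call_single_indx(indx):
    return np.sum(data[:, indx])

def launch_jobs(nmap=10, num_jobs=3, n1=400, n2=20):
    with Pool(processes=num_jobs, initializer=init_worker, initargs=(n1, n2)) as pool:
        return np.array(pool.map(call_single_indx, range(nmap)))

if __name__ == "__main__":
    print(launch_jobs())
```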
<python><numpy><multiprocessing>
2025-08-27 23:13:51
2
995
I.P. Freeley
79,748,461
5,629,527
how to pass pre-computed folds to successiveHalving in sklearn
I want to undersample 3 cross-validation folds from a dataset, using, say, RandomUnderSampler from imblearn, and then optimize the hyperparameters of various gbms using those undersampled folds as input.

The code I have so far is:

```
def train_model_with_undersampling(undersampler, estimator, scale, params, X_train, y_train):
    # we need the resampler within a pipeline, because we are
    # using cross-validation to optimize hyperparameters, which
    # means that we need a left out fold without resampling to
    # evaluate the model.
    # The only way is with imblearn's pipeline (see imblearn docs)
    # annoying cause we need to resample every time
    if scale is True:
        pipe = Pipeline([
            ("scaler", MinMaxScaler()),
            ("sampler", undersampler),
            ("model", estimator),
        ])
    else:
        pipe = Pipeline([
            ("sampler", undersampler),
            ("model", estimator),
        ])

    search = HalvingRandomSearchCV(
        estimator=pipe,
        param_distributions=params,
        n_candidates="exhaust",
        factor=3,  # only a third of the candidates are promoted
        resource='model__n_estimators',  # the limiting resource
        max_resources=500,  # max number of trees
        min_resources=10,
        scoring='roc_auc',
        cv=3,
        random_state=10,
        refit=True,
        n_jobs=-1,
    )

    search.fit(X_train, y_train)
    return search
```

However, this function will undersample the data anew to tune each gbm model. This is inefficient, because the undersampling is the same.

What I would like is to be able to pass to HalvingRandomSearchCV the undersampled folds and the test fold somehow.

In short, I want to undersample 3 different folds of X_train, and then be able to use those to optimize the hyperparameters of xgboost, catboost, GradientBoostingClassifier and other models.

Is there a way to do so?
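One way to undersample exactly once, sketched below: scikit-learn's `cv` argument also accepts an iterable of `(train_indices, test_indices)` pairs, so you can precompute splits whose train side is already undersampled (test folds left untouched, preserving honest evaluation) and pass the same list to every search:

```python
import numpy as np
from imblearn.under_sampling import RandomUnderSampler
from sklearn.model_selection import StratifiedKFold

def undersampled_splits(X, y, n_splits=3, seed=10):
    y = np.asarray(y)
    splits = []
    for train_idx, test_idx in StratifiedKFold(
            n_splits=n_splits, shuffle=True, random_state=seed).split(X, y):
        rus = RandomUnderSampler(random_state=seed)
        # resample the row indices themselves, then map back to positions
        kept, _ = rus.fit_resample(train_idx.reshape(-1, 1), y[train_idx])
        splits.append((kept.ravel(), test_idx))
    return splits

# splits = undersampled_splits(X_train, y_train)
# HalvingRandomSearchCV(model, params, cv=splits, ...)  # reused for every gbm
```

The pipeline's `sampler` step is then no longer needed, since the undersampling is baked into the train indices.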
<python><scikit-learn><hyperparameters><imblearn>
2025-08-27 20:41:58
1
1,134
Sole Galli
79,748,388
2,807,964
It is possible to create a Python package that installs partially?
Recently, I have come across several libraries that support installs like:

```bash
pip3 install library[all]
```

How can I do that? I'm not finding any documentation regarding that style of installation.

Is it related to the `setup.cfg` configuration or something similar? My project is organized as below:

```ini
[options]
packages =
    my_lib
    my_lib.connection
    my_lib.web
    my_lib.data
```

I want to have the option to install everything or only part of it. For example:

```bash
pip3 install my_lib[web]
```
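For what it's worth, the mechanism behind `library[all]` is called "extras": optional dependency groups declared in the package metadata. One caveat: extras select extra *dependencies*, not subpackages, so all of `my_lib`'s code is installed either way; the extras just pull in what `my_lib.web` etc. need to actually work. A sketch for `setup.cfg` (package names illustrative):

```ini
[options.extras_require]
web =
    requests
data =
    pandas
all =
    requests
    pandas
```

After which `pip3 install "my_lib[web]"` installs `my_lib` plus `requests`, and `my_lib[all]` pulls everything.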
<python><packaging>
2025-08-27 19:11:53
0
880
jcfaracco
79,748,307
3,817,456
cuda commands from python in opencv fail
I've compiled OpenCV with CUDA support and am trying to use some CUDA functions. To that end I'm trying the following test:

```
import cv2

if cv2.cuda.getCudaEnabledDeviceCount() > 0:
    print("OpenCV was built with CUDA support.")
else:
    print("OpenCV was not built with CUDA support.")

build_info = cv2.getBuildInformation()
#print(build_info)

print("\n=== CPU flags ===")
for line in build_info.splitlines():
    if "math" in line.lower() or "fast" in line.lower():
        print(line)

print("\n=== CUDA section ===")
for line in build_info.splitlines():
    if "CUDA" in line or "FAST_MATH" in line:
        print(line)

# Check if a CUDA-enabled GPU is available
if not cv2.cuda.getCudaEnabledDeviceCount():
    print("CUDA-enabled GPU not found.")
    # Fallback to a CPU implementation if needed
    exit()

# Load an image
image = cv2.imread('/home/jeremy/Documents/drone_detect/laser_turret/scout_software/obsidian.jpg')
if image is None:
    print("Error loading image.")
    exit()

# Convert to GRAY on CPU (CUDA filter prefers 1 or 4 channels)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
print("gray:", gray.shape, gray.dtype, gray.flags['C_CONTIGUOUS'])

# Upload to GPU
gpu_image = cv2.cuda_GpuMat()
gpu_image.upload(gray)
assert isinstance(gpu_image, cv2.cuda_GpuMat)

# Create and apply CUDA Gaussian (type = CV_8UC1)
# sanity: uploaded image must be 8UC1 and non-empty
assert isinstance(gpu_image, cv2.cuda_GpuMat) and not gpu_image.empty()
assert gpu_image.type() == cv2.CV_8UC1, f"type={gpu_image.type()} (need CV_8UC1)"

gaussian_filter = cv2.cuda.createGaussianFilter(cv2.CV_8UC1, cv2.CV_8UC1, (15, 15), 0)

# some Python bindings require an explicit dst GpuMat
dst_gpu = cv2.cuda_GpuMat()
gaussian_filter.apply(gpu_image, dst_gpu)
gpu_blurred_image = dst_gpu

# Display or save the blurred image
cv2.imshow('Original Image', image)
cv2.imshow('Blurred Image', gpu_blurred_image.download())  # this is gray
cv2.waitKey(0)
cv2.destroyAllWindows()
```

This invariably hits problems along the lines of:

```
OpenCV was built with CUDA support.

=== CPU flags ===
C++ flags (Release): -fsigned-char -ffast-math -fno-finite-math-only -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG -DNDEBUG
C++ flags (Debug): -fsigned-char -ffast-math -fno-finite-math-only -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g -O0 -DDEBUG -D_DEBUG
C flags (Release): -fsigned-char -ffast-math -fno-finite-math-only -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse3 -fvisibility=hidden -O3 -DNDEBUG -DNDEBUG
C flags (Debug): -fsigned-char -ffast-math -fno-finite-math-only -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections -msse3 -fvisibility=hidden -g -O0 -DDEBUG -D_DEBUG
Unavailable: alphamat cannops cvv fastcv hdf java julia matlab ovis python2 sfm viz
NVIDIA CUDA: YES (ver 12.9, CUFFT CUBLAS FAST_MATH)

=== CUDA section ===
NVIDIA CUDA: YES (ver 12.9, CUFFT CUBLAS FAST_MATH)
gray: (1451, 2194) uint8 True
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/jeremy/Documents/drone_detect/laser_turret/shared/verify_opencv_cuda.py", line 53, in <module>
    gaussian_filter.apply(gpu_image, dst_gpu)
cv2.error: OpenCV(4.13.0-dev) :-1: error: (-5:Bad argument) in function 'apply'
> Overload resolution failed:
>  - src is not a numpy array, neither a scalar
>  - Expected Ptr<cv::cuda::GpuMat> for argument 'src'
>  - Expected Ptr<cv::UMat> for argument 'src'
```

So what is going wrong here? I have an Nvidia A1000 GPU and compiled OpenCV using the following cmake command:

```
cmake -G "Unix Makefiles" \
  -D CMAKE_BUILD_TYPE=Release \
  -D CMAKE_INSTALL_PREFIX=/usr/local \
  -D OPENCV_EXTRA_MODULES_PATH=$HOME/opencv_contrib/modules \
  -D BUILD_opencv_highgui=ON \
  -D WITH_GTK=ON \
  -D WITH_OPENGL=ON \
  -D WITH_CUDA=ON \
  -D WITH_CUDNN=ON \
  -D WITH_CUBLAS=ON \
  -D OPENCV_DNN_CUDA=ON \
  -D CUDA_ARCH_BIN=8.6 \
  -D CUDAToolkit_ROOT=/usr/local/cuda \
  -D BUILD_opencv_cudacodec=ON \
  -D WITH_NVCUVENC=OFF \
  -D BUILD_EXAMPLES=ON \
  -D BUILD_PYTHON_SUPPORT=ON \
  -D INSTALL_PYTHON_EXAMPLES=ON \
  -D BUILD_TESTS=ON \
  -D BUILD_PERF_TESTS=OFF \
  -D BUILD_opencv_python3=ON \
  -D Python3_EXECUTABLE="$PYEXE" \
  -D Python3_NumPy_INCLUDE_DIRS="$NUMPY_INC" \
  -D OPENCV_PYTHON3_INSTALL_PATH=/home/jeremy/Documents/drone_detect/myenv/lib/python3.12/site-packages \
  -D OPENCV_PYTHON3_LIMITED_API=OFF \
  -D CUDA_FAST_MATH=ON \
  -D ENABLE_FAST_MATH=ON ..
```
<python><opencv><gpu>
2025-08-27 17:36:16
0
6,150
jeremy_rutman
79,748,241
7,036,941
Silent crash when adding emoji to button label in PySide6 / Qt Creator
I'm using Qt Creator to design the GUI for an app. I wanted to add some emoji to the button labels, but the app silently crashes when I do. I followed the error to the `retranslateUI()` function (which is autogenerated code, when generating a `py` file from a `ui` one with `pyside6-uic`). In it, labels are set on my widgets. The autogenerated code uses escape sequences to represent my emoji, and `QCoreApplication.translate` seems not to accept them. If I replace the hex codes with the emoji, the app launches, but this is autogenerated code, so I can't edit it, as the next time the original file changes, my changes will be overwritten.

I'm using PySide6 (v6.9.1) and Python 3.13 on Windows 11.

I managed to replicate the issue with this code snippet:

```py
import sys
from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton
from PySide6.QtCore import QCoreApplication

class Window(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setGeometry(500, 500, 400, 200)
        self.setWindowTitle("Button")
        button = QPushButton(self)
        # When I generate a `py` file from the `ui` one, `setupUI()` is called, which
        # eventually calls `retranslateUI()`, which sets titles, labels, etc...
        # When it's time to set the label for the button, the autogenerated code uses
        # the UTF hex codes for the emoji, and the app crashes here
        button.setText(QCoreApplication.translate("MainWindow", "click me \ud83e\udd23\ufe0f", None))  # this would crash the app
        # But using emoji in the string seems to work fine
        # button.setText(QCoreApplication.translate("MainWindow", "click me 🤣️", None))  # this works
        button.setGeometry(100, 75, 200, 50)
        button.clicked.connect(self.click)
        self.show()

    def click(self):
        pass

if __name__ == "__main__":
    app = QApplication(sys.argv)
    ui = Window()
    sys.exit(app.exec_())
```

I think it's an issue with `QCoreApplication.translate()`, as labelling the button with either emoji or escape sequences creates buttons with emoji on them:

```py
    # Both these options work; either using an emoji
    # or the UTF code for the emoji creates a button with an emoji on it
    button1 = QPushButton("click me 🤣️", self)  # this works
    button2 = QPushButton("click me \ud83e\udd23\ufe0f", self)  # this works too
```

Does anybody know how to get emoji as part of the label on a button widget?
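If the generated file really does contain `\ud83e\udd23`-style escapes, those produce *lone surrogate* code points in Python, which cannot be encoded to UTF-8 when the string is handed to Qt; that would explain the crash. A sketch of a runtime workaround that re-pairs the surrogates without touching the autogenerated file (wrap the strings as they pass through your own code):

```python
def fix_surrogates(s: str) -> str:
    # surrogatepass carries the lone surrogates through encoding; decoding
    # UTF-16 then merges each high/low pair into the real emoji code point
    return s.encode("utf-16", "surrogatepass").decode("utf-16")

label = fix_surrogates("click me \ud83e\udd23\ufe0f")
assert label == "click me 🤣️"
```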
<python><qt><user-interface><pyside6>
2025-08-27 16:27:10
1
408
Joel Santos Rico
79,748,162
38,666
Using SQLAlchemy 2.0 to scope database transactions to pytest modules via Postgres
I am working to migrate an older codebase from SQLAlchemy 1.3 to 2.0. I successfully migrated to 1.4, but I am running into trouble with the testing suite. The tests are using Postgres SAVEPOINTs and monkeypatching `session.commit` => `session.flush` to add two nested layers of isolation:

- Root level tests just get the standard "each test method is wrapped in a separate transaction, rolled back at the end of the test" [per the official recommendations](https://docs.sqlalchemy.org/en/14/orm/session_transaction.html#joining-a-session-into-an-external-transaction-such-as-for-test-suites)
- Tests in modules, however, are in nested transactions with some monkeypatching that prevents the outer transaction from getting closed (FWIW the `package_session` logic is basically identical to the root-level `session` fixture, except with the monkeypatch), as shown below:

```
@pytest.fixture(scope="package")
def package_session(test_engine: Engine, monkeypatch_package) -> Session:
    # Create a nested transaction that includes standard module data
    connection = test_engine.connect()
    transaction = connection.begin()
    session = Session(bind=connection)
    # Overwrite commits with flushes so that we can query stuff, but it's in the same transaction
    monkeypatch_package.setattr(session, "commit", session.flush)
    # Create our data that is shared by the tests in this module
    _create_data_for_test(session)
    try:
        yield session
    finally:
        transaction.rollback()
        connection.close()

@pytest.fixture(scope="function")
def session(package_session):
    """Return a nested transaction on the outer session, to prevent rolling back shared data"""
    package_session.begin_nested()
    try:
        yield package_session
    finally:
        package_session.rollback()
```

Under SQLAlchemy 1.4 this works, but upon upgrading to 2.0 the test suite is littered with errors about missing shared data (no low-level SQLAlchemy errors, though, unlike when I updated this pattern from 1.3 to 1.4).

I've been fighting this for a couple days now, so I figured I'd ask: is this pattern just not possible in 2.0? I could populate shared data in the "function" scoped session if I had to (just will slow the test suite down), but I figured I'd ask if anyone can spot something obvious that I'm doing wrong, since I liked the pattern under 1.3/1.4.
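For reference, SQLAlchemy 2.0 grew a first-class replacement for the commit-to-flush monkeypatch: passing `join_transaction_mode="create_savepoint"` to the `Session` makes `commit()` release only a SAVEPOINT while the fixture's outer transaction stays open for the final rollback. A sketch of how the package fixture might look without the patch (untested against the suite in question):

```python
import pytest
from sqlalchemy.orm import Session

@pytest.fixture(scope="package")
def package_session(test_engine):
    connection = test_engine.connect()
    transaction = connection.begin()
    # commits inside tests now only release savepoints on this connection
    session = Session(bind=connection, join_transaction_mode="create_savepoint")
    _create_data_for_test(session)  # shared-data helper from the question
    try:
        yield session
    finally:
        session.close()
        transaction.rollback()
        connection.close()
```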
<python><postgresql><sqlalchemy><pytest>
2025-08-27 15:24:36
2
19,277
One Crayon
79,748,065
1,145,666
Why is _cp_dispatch not routing to another method?
I have this CherryPy object to process a simple REST API, but I fail to understand how `_cp_dispatch` is supposed to work.

```
class Products(object):
    def __init__(self, product_database):
        self.product_database = product_database

    def _cp_dispatch(self, vpath):
        # batch/
        if len(vpath) == 1 and vpath[0] == "batch":
            vpath[0] = "realbatch"
            return self

        # batch/{id}/product/
        if len(vpath) == 3 and vpath[0] == "batch":
            vpath.pop(0)  # pop "batch"
            cherrypy.request.params['batch_id'] = vpath.pop(0)
            return self

        # batch/{id}/product/{sku}/
        if len(vpath) == 4 and vpath[0] == "batch" and vpath[2] == "product":
            vpath.pop(0)  # pop "batch"
            cherrypy.request.params['batch_id'] = vpath.pop(0)
            vpath.pop(0)  # pop "product"
            cherrypy.request.params['sku'] = vpath[0]
            vpath[0] = "product"
            return self

        # defaults
        return self

    # POST rest-products/batch --> create a batch (name in JSON body), return batch id
    # GET rest-products/batch --> return list of batches
    @cherrypy.expose
    @cherrypy.tools.json_out()
    def realbatch(self):
        # this method is not called when calling "batch/"
        pass

    # POST rest-products/batch/{id}/product --> add product to batch (product data in JSON body)
    @cherrypy.expose
    @cherrypy.tools.json_out()
    def product(self, batch_id, sku=None):
        # this method is called when calling "batch/{id}/product/" or "batch/{id}/product/{sku}"
        pass
```

As I understand it, CherryPy tries to find a match on the path (which fails, because it cannot find a `batch` method). Then it tries to find a catch-all `index()`, which also fails, so it goes to `_cp_dispatch`. In my case, for `batch/`, that rewrites it into `realbatch/` and tries again. However, that doesn't happen.

I can fix it by simply renaming `realbatch` to `index`, but then calls to `batc/` work as well, which I don't want.

My goal is simple: route `batch/` to the `realbatch()` method and `batch/{id}/product/{sku}` to `product()`. All other paths should return 404.
<python><routes><cherrypy>
2025-08-27 13:44:30
2
33,757
Bart Friederichs
79,747,670
633,961
error: "get" is not a known attribute of "None" (reportOptionalMemberAccess)
I use PyRight to check my Django code. It complains about this:

```py
class FooForm(ModelForm):
    def clean(self):
        cleaned_data = super().clean()
        start = cleaned_data.get("start_date")  # <------- here
```

> error: "get" is not a known attribute of "None" (reportOptionalMemberAccess)

I installed `django-stubs`.

The Django ORM magic gets detected. Example:

```py
foo = Foo.objects.get(id=123)

foo. --> Autocomplete works. I see the attributes of a foo instance.
```

Both (vscode and the PyRight CLI tool) understand the above snippet. It works in most cases, but not for `cleaned_data`.

PyRight complains about `start = cleaned_data.get("start_date")` but vscode does not.

How to make PyRight aware that `cleaned_data` is a dictionary?
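The root cause appears to be that django-stubs types `clean()` as returning an optional dict, so PyRight is right that `None` is possible. A sketch of narrowing it away (an explicit `if cleaned_data is None: return cleaned_data` guard works equally well):

```python
from django.forms import ModelForm

class FooForm(ModelForm):
    def clean(self):
        cleaned_data = super().clean() or {}   # falls back to an empty dict
        start = cleaned_data.get("start_date")
        return cleaned_data
```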
<python><django><python-typing><pyright>
2025-08-27 07:34:49
1
27,605
guettli
79,747,645
1,482,820
Simple_Salesforce Python Connected App Sandbox
<p>I am trying to use simple-salesforce to connect to a sandbox.</p> <p>Refer: <a href="https://simple-salesforce.readthedocs.io/en/stable/user_guide/examples.html" rel="nofollow noreferrer">https://simple-salesforce.readthedocs.io/en/stable/user_guide/examples.html</a></p> <p>I can successfully connect with a username/password/token and domain=test.</p> <pre><code>sf = Salesforce(username='???',password='???',security_token='???',domain='test') </code></pre> <p>Now, I am moving to a connected app ... my problem is that the documentation says I can use:</p> <pre><code>sf = Salesforce(consumer_key='???', consumer_secret='???', domain='???.my') </code></pre> <p>Then it states &quot;If you’d like to enter a sandbox, simply add domain='test' to your Salesforce() call.&quot;</p> <p>So, that would now be:</p> <pre><code>sf = Salesforce(consumer_key='???', consumer_secret='???', domain='test') </code></pre> <p>But there is now nothing to tell it my instance and if I pass that, then I get:</p> <p>SalesforceAuthenticationFailed: Authentication failed (code: INVALID AUTH): You must submit either a security token or organizationId for authentication</p> <p>I tried adding the organizationId (both 15 and 18 digits), but I still got the error.</p> <p>Next, I tried adding the full domain rather than test, which was:</p> <pre><code>sf = Salesforce(consumer_key='???', consumer_secret='???', domain='???.sandbox.my') </code></pre> <p>Plus, I also tried:</p> <pre><code>sf = Salesforce(username='???', consumer_key='???', consumer_secret='???', domain='???.sandbox.my') sf = Salesforce(username='???', organizationId='???', consumer_key='???', consumer_secret='???', domain='???.sandbox.my') sf = Salesforce(organizationId='???', consumer_key='???', consumer_secret='???', domain='???.sandbox.my') </code></pre> <p>I don't know how this works, but I can't find it in the documentation.</p> <p>Has anyone done this successfully? Am I missing something basic here?</p>
<python><salesforce><simple-salesforce><salesforce-conntected-apps>
2025-08-27 07:10:53
0
314
G-Man
79,747,603
633,961
Activate virtualenv in .envrc in poetry managed project
<p>I want to activate a Python virtual env by changing into a directory.</p> <p>This should activate the virtual env:</p> <pre><code>cd ~/projects/myproject </code></pre> <p>The project gets managed with <code>poetry</code>.</p> <p>Background: I am too lazy to type <code>poetry run mycommand</code>. I want to type only <code>mycommand</code>.</p>
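<p>For context, this is the kind of <code>.envrc</code> I imagine (untested sketch; it assumes direnv is hooked into the shell and the Poetry env already exists):</p> <pre><code>export VIRTUAL_ENV=$(poetry env info --path)
export PATH=&quot;$VIRTUAL_ENV/bin:$PATH&quot;
</code></pre>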
<python><virtualenv><python-poetry><direnv>
2025-08-27 06:25:10
1
27,605
guettli
79,747,317
252,873
How to sort pandas groups by (multiple/all) values of the groups?
<p>I am trying to do a somewhat complicated group and sort operation in pandas. <strong>I want to sort the groups by their values in ascending order, using successive values for tiebreaks as needed.</strong></p> <p>I have read the similar question <a href="https://stackoverflow.com/questions/68250141/pandas-dataframe-how-to-sort-groups-by-the-earliest-time-of-a-group">Pandas dataframe: How to sort groups by the earliest time of a group</a>, but that only uses the minimum value in each group for sorting and thus doesn't handle the case where two groups have the same minimum value but differ in their other values.</p> <p>Likewise, <a href="https://stackoverflow.com/questions/27842613/pandas-groupby-then-sort-within-groups">pandas groupby, then sort within groups</a> discusses <strong>within group</strong> sorting, but not <strong>between group</strong> sorting, which is what I'm after.</p> <p>As a concrete example, consider the following dataframe:</p> <pre class="lang-python prettyprint-override"><code>df = pd.DataFrame({&quot;pool&quot;: [5, 1, 9, 9, 5, 7, 7, 7, 9, 1, 5], &quot;arrival&quot;:[227, 60, 60, 88, 55, 55, 276, 46, 46, 35, 35]}) </code></pre> <p>I want to sort the pools by arrival, such that the resulting dataframe is:</p> <pre class="lang-none prettyprint-override"><code> pool arrival 10 5 35 4 5 55 0 5 227 9 1 35 1 1 60 7 7 46 5 7 55 6 7 276 8 9 46 2 9 60 3 9 88 </code></pre> <p>I have been able to accomplish this via the following code:</p> <pre class="lang-python prettyprint-override"><code># create column to indicate order of values in each group df = df.sort_values(&quot;arrival&quot;) df[&quot;order&quot;] = df.groupby(&quot;pool&quot;)[&quot;arrival&quot;].cumcount() # use 'order' column to make columns for each arrival position df[&quot;first&quot;] = df[&quot;second&quot;] = df[&quot;third&quot;] = np.nan df.loc[df[&quot;order&quot;] == 0,&quot;first&quot;] = df.loc[df[&quot;order&quot;] == 0,&quot;arrival&quot;] df.loc[df[&quot;order&quot;] == 1,&quot;second&quot;] = df.loc[df[&quot;order&quot;] == 1,&quot;arrival&quot;] df.loc[df[&quot;order&quot;] == 2,&quot;third&quot;] = df.loc[df[&quot;order&quot;] == 2,&quot;arrival&quot;] # propagate the values to every member of the group df[[&quot;first&quot;,&quot;second&quot;,&quot;third&quot;]] = df.groupby(&quot;pool&quot;)[[&quot;first&quot;,&quot;second&quot;,&quot;third&quot;]].transform(&quot;max&quot;) # for groups with less than three members, fill the values with previous ones df[&quot;second&quot;] = df[&quot;second&quot;].fillna(df[&quot;first&quot;]) df[&quot;third&quot;] = df[&quot;third&quot;].fillna(df[&quot;second&quot;]) # sort by the arrival position columns, then drop all the helper columns df = df.sort_values([&quot;first&quot;,&quot;second&quot;,&quot;third&quot;,&quot;pool&quot;]).drop(columns=[&quot;first&quot;,&quot;second&quot;,&quot;third&quot;,&quot;order&quot;]) </code></pre> <p>It works, but it's not particularly scalable to pools with larger numbers of arrivals (20+). 
I'm convinced there has to be a better way, but I can't figure out how to do it.</p> <p>I also tried combining the <code>transform</code> and <code>nth</code> functions as discussed in <a href="https://stackoverflow.com/questions/55891019/using-transform-together-with-nth">Using transform together with nth</a> but, contrary to the accepted answer on that question, trying to pass <code>&quot;nth&quot;</code> to <code>groupby.transform</code> raises <code>ValueError: 'nth' is not a valid function name for transform(name)</code> since <code>nth</code> may return <a href="https://github.com/pandas-dev/pandas/issues/45986#issuecomment-1336597466" rel="noreferrer">none or multiple values for a given group</a> and transform can't handle those cases.</p>
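<p>One more scalable idea I would like sanity-checked: rank each pool by its full, sorted tuple of arrivals and sort on that (a sketch; it leans on Python tuple comparison to handle tiebreaks and unequal group sizes, and it turns <code>pool</code> into a Categorical):</p> <pre class="lang-python prettyprint-override"><code># order the pools by their sorted arrival tuples, then sort rows by that order
order = (df.sort_values(&quot;arrival&quot;)
           .groupby(&quot;pool&quot;)[&quot;arrival&quot;].agg(tuple)
           .sort_values()
           .index)
df_sorted = (df.assign(pool=pd.Categorical(df[&quot;pool&quot;], categories=order, ordered=True))
               .sort_values([&quot;pool&quot;, &quot;arrival&quot;]))
</code></pre>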
<python><pandas><dataframe><sorting><group-by>
2025-08-26 20:54:39
3
1,813
Jessica
79,747,123
1,107,474
Python websocket library, how to print handshake?
<p>I am using the Python <code>websocket</code> library. I need to see the handshake payload.</p> <p>My code effectively looks like this:</p> <pre class="lang-py prettyprint-override"><code>from websocket import create_connection ws = create_connection(&quot;the websocket url&quot;) </code></pre> <p>I found:</p> <pre class="lang-py prettyprint-override"><code>ws.enableTrace(True) </code></pre> <p>Unfortunately, I cannot call it before the connection/object is created, and afterwards it is too late.</p> <p>I tried adding this as well:</p> <pre class="lang-py prettyprint-override"><code>import logging logging.basicConfig( format=&quot;%(asctime)s %(message)s&quot;, level=logging.DEBUG, ) </code></pre> <p>Unfortunately, it did not do anything.</p> <p>Any solutions?</p>
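<p>Is the module-level variant what I should be using instead? (Sketch below; it assumes the <code>websocket-client</code> package, where <code>enableTrace</code> is a module-level function, not a method on the connection object.)</p> <pre class="lang-py prettyprint-override"><code>import websocket

websocket.enableTrace(True)  # switch tracing on before the handshake happens
ws = websocket.create_connection(&quot;the websocket url&quot;)
</code></pre>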
<python><websocket>
2025-08-26 17:04:46
1
17,534
intrigued_66
79,747,037
2,637,604
VScode does not use the correct Python interpreter
<p>On Ubuntu 24, I have installed Anaconda and created a virtual environment myenv. I can see myenv when I search for a Python interpreter in VS Code; I select it, and in the dedicated terminal I activate the environment and check the Python version:</p> <pre><code>conda activate myenv which python </code></pre> <p>But when I click Run in VS Code (the top-right button with the arrow), it keeps using:</p> <pre><code>/bin/python3 </code></pre> <p>I have tried the following JSON configurations in a local .vscode directory:</p> <pre><code> .vscode/settings.json { &quot;python.defaultInterpreterPath&quot;: &quot;/home/path/anaconda3/envs/myenv/bin/python&quot; } .vscode/launch.json { &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: Current File&quot;, &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot; } ] } </code></pre> <p>Even after closing and reopening VS Code, none of these works. I am quite puzzled: given that Python, virtual environments, and VS Code are quite standard tools, I expected this to be straightforward. Am I missing something?</p>
<python><visual-studio-code><anaconda>
2025-08-26 15:51:34
0
1,792
Xavier Prudent
79,746,907
13,845,688
No model seems to support supports_parallel_function_calling to LiteLLM
<p>I'm using <strong>LiteLLM</strong> in a Python project and I'm testing support for <em>parallel function calling</em> on different models.<br /> Here’s a minimal reproducible example:</p> <pre class="lang-py prettyprint-override"><code>from litellm import completion from litellm.utils import supports_function_calling, supports_parallel_function_calling models = [ &quot;gpt-5&quot;, &quot;gpt-5-mini&quot;, &quot;gpt-4-turbo-preview&quot;, &quot;gpt-4o&quot;, &quot;gpt-3.5-turbo-1106&quot; ] for model in models: print(f&quot;Model: {model}&quot;) print(&quot; Supports function calling:&quot;, supports_function_calling(model)) print(&quot; Supports parallel function calling:&quot;, supports_parallel_function_calling(model)) print() </code></pre> <p>Output I get:</p> <pre class="lang-yaml prettyprint-override"><code>Model: gpt-5 Supports function calling: True Supports parallel function calling: False Model: gpt-5-mini Supports function calling: True Supports parallel function calling: False Model: gpt-4-turbo-preview Supports function calling: True Supports parallel function calling: False Model: gpt-4o Supports function calling: True Supports parallel function calling: False Model: gpt-3.5-turbo-1106 Supports function calling: True Supports parallel function calling: False </code></pre> <p>According to the <a href="https://docs.litellm.ai/docs/completion/function_call" rel="nofollow noreferrer">official LiteLLM docs</a>, you can do:</p> <pre class="lang-py prettyprint-override"><code>assert litellm.supports_parallel_function_calling(model=&quot;gpt-4-turbo-preview&quot;) == True </code></pre> <p>So I expected <code>supports_parallel_function_calling(&quot;gpt-4-turbo-preview&quot;)</code> to return <strong>True</strong>, but in my test it always returns <strong>False</strong>.</p> <p>LiteLLM version: <code>1.76.0</code></p> <p>The problem does not change if the provider is specified in the model name (<code>chatgpt-x</code> to <code>openai/chatgpt-x</code>). In the official doc it often talks about <code>gpt-3.5-turbo-1106</code> but that doesn't work for me either.</p>
<python><openai-api><litellm>
2025-08-26 13:39:13
0
308
EnzoDeg40
79,746,620
7,456,317
LlamaIndex Python: Metadata filter with `None` value does not retrieve documents
<p>I’m working with <strong>LlamaIndex</strong> in Python and ran into an issue with metadata filtering.</p> <p>I have a <code>TextNode</code> that includes a metadata field explicitly set to <code>None</code>. When I try to retrieve it using a metadata filter where value is <code>None</code>, no documents are returned. I expected that documents with <code>None</code> metadata would match such a filter.</p> <p>Here's an MRE:</p> <pre class="lang-py prettyprint-override"><code>from llama_index.core import VectorStoreIndex from llama_index.core.schema import TextNode from llama_index.core.vector_stores import ( MetadataFilter, MetadataFilters, FilterOperator, ) node_01 = TextNode( text=&quot;This document has None in the metadata&quot;, id_=&quot;node_01&quot;, metadata={&quot;start_date&quot;: None}, ) doc_index = VectorStoreIndex([node_01]) # Debug: Check what's actually stored print(&quot;Index nodes:\n&quot;, [node.metadata for node in doc_index.docstore.docs.values()]) filter_null_start_date = MetadataFilter(key=&quot;start_date&quot;, operator=FilterOperator.EQ, value=None) filters = MetadataFilters(filters=[filter_null_start_date]) retriever = doc_index.as_retriever(filters=filters, similarity_top_k=1) nodes = retriever.retrieve(&quot;this&quot;) print(&quot;Retrieved nodes:\n&quot;, [(node.node_id, node.metadata) for node in nodes]) </code></pre> <p>Output:</p> <pre><code>Index nodes: [{'start_date': None}] Retrieved nodes: [] </code></pre> <p>So even though the metadata is stored as <code>{'start_date': None}</code>, filtering with EQ value=None does not return the node.</p> <p>My questions:</p> <ul> <li>Is this the expected behavior in LlamaIndex (i.e., None metadata is not filterable)?</li> <li>If so, what is the recommended way to index “null” metadata values so they can be retrieved via filters?</li> </ul> <p>Any clarification or workaround would be appreciated.</p>
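<p>The only workaround I can come up with is storing a sentinel string instead of <code>None</code> and filtering on that (a sketch; <code>&quot;__null__&quot;</code> is an arbitrary value I made up):</p> <pre class="lang-py prettyprint-override"><code>node_01 = TextNode(
    text=&quot;This document has None in the metadata&quot;,
    id_=&quot;node_01&quot;,
    metadata={&quot;start_date&quot;: &quot;__null__&quot;},  # sentinel meaning &quot;no start date&quot;
)

filter_null_start_date = MetadataFilter(
    key=&quot;start_date&quot;, operator=FilterOperator.EQ, value=&quot;__null__&quot;
)
</code></pre>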
<python><llama-index>
2025-08-26 09:03:43
1
913
Gino
79,746,618
7,465,516
Number of arguments was 3 and is now 3 -- Strange Pylint(W0221:arguments-differ) on overriding log_message of BaseHTTPRequestHandler
<p>When I try to override the logging behaviour in my <code>BaseHTTPRequestHandler</code>-subclass like this:</p> <pre class="lang-py prettyprint-override"><code>from http.server import BaseHTTPRequestHandler class MyHandler(BaseHTTPRequestHandler): def log_message(self, fmt, *args): pass </code></pre> <p>I get this warning</p> <blockquote> <p>&quot;Number of parameters was 3 in 'BaseHTTPRequestHandler.log_message' and is now 3 in overriding 'MyHandler.log_message' method <a href="https://pylint.readthedocs.io/en/latest/user_guide/messages/warning/arguments-differ.html" rel="nofollow noreferrer">Pylint(W0221:arguments-differ)</a>&quot;</p> </blockquote> <p>This is strange because the number of arguments is obviously correct, as the message itself states. Is the varargs (<code>*args</code>) causing a problem with inheritance here?</p>
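<p>For comparison, the base method in CPython's <code>http.server</code> is declared like this (copied from the stdlib; note the parameter is named <code>format</code>, not <code>fmt</code> — I wonder whether the rename is what actually trips the checker):</p> <pre class="lang-py prettyprint-override"><code>class BaseHTTPRequestHandler(socketserver.StreamRequestHandler):
    def log_message(self, format, *args):
        ...
</code></pre>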
<python><pylint>
2025-08-26 09:03:22
1
2,196
julaine
79,746,460
3,381,858
CFAST + zero out complete media
<p>I have a 32 GB SATA CFAST medium and a bootloader that creates 4 partitions and copies some files to 3 of them (partitions 1, 2, and 4); the 3rd partition stays empty.</p> <p>After this, my next task is to burn the same medium with a different bootloader (which creates 3 partitions) and write files to the newly created partitions. For that, I need to zero the complete medium, which usually takes approximately 20 minutes.</p> <p>Now, to save time, instead of zeroing the complete 32 GB medium, only the first sector is zeroed, since the bootloader and MBR information live in the first 512 bytes of the first sector.</p> <p>So, using a Python program and the OS routines of Python, the medium is opened in binary write mode with <code>os.open(self._device_name, os.O_WRONLY | os.O_BINARY)</code>, and a 512-byte buffer of zero bytes is written over the first 512 bytes.</p> <p>But while writing the second bootloader, during the write to the third partition, this error was generated:</p> <pre><code>error: 0x7f2bb725 msg: ' Could not write 7 blocks in inode table starting at 2458106: Attempt to write block from filesystem resulted in short write' </code></pre> <p>I know I am playing with the MBR; a single wrong step, and the medium will not be bootable.</p> <p>I have two questions:</p> <ol> <li><p>When partitions are created, is some information written at the starting cylinder position or inside the partitions themselves that could cause this error?</p> </li> <li><p>Is there another way to quickly zero the complete medium?</p> </li> </ol> <p>OS: Windows 11</p> <p>Python: python 3.13.6</p>
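<p>For question 2, the variant I am considering is clearing the first mebibyte instead of a single sector, so any partition/filesystem metadata near the start is wiped too (a sketch; the device path is a placeholder, not my real device name):</p> <pre><code>import os

DEVICE = r'\\.\PhysicalDrive1'  # placeholder device path

fd = os.open(DEVICE, os.O_WRONLY | os.O_BINARY)
try:
    os.write(fd, b'\x00' * (1024 * 1024))  # MBR plus surrounding metadata
finally:
    os.close(fd)
</code></pre>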
<python><storage>
2025-08-26 06:03:53
0
569
atulya
79,746,431
13,825,658
Why does mypy issubclass type narrowing only work on "type" instances?
<p>Code snippet:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any class MyClass: pass def f(o: Any) -&gt; None: if isinstance(o, type) and issubclass(o, MyClass): reveal_type(o) # Revealed type is &quot;Type[MyClass]&quot; if issubclass(o, MyClass): reveal_type(o) # Revealed type is &quot;Any&quot; </code></pre> <p>Is it a bug in mypy or is there a reason why the 2nd example is not expected to work?</p>
<python><python-typing><mypy>
2025-08-26 05:11:05
1
1,368
Leonardus Chen
79,746,389
9,191,983
Python snowpark print()/.show() is not working
<p>I tried to create a Snowpark program in Snowsight. There is already some example code shown when I add a worksheet, as follows:</p> <pre><code>import snowflake.snowpark as snowpark from snowflake.snowpark.functions import col def main(session: snowpark.Session): # Your code goes here, inside the &quot;main&quot; handler. tableName = 'information_schema.packages' dataframe = session.table(tableName).filter(col(&quot;language&quot;) == 'python') # Print a sample of the dataframe to standard output. dataframe.show() # Return value will appear in the Results tab. return dataframe </code></pre> <p>I set the correct database, schema, role, and warehouse as well, but the message below appears no matter what I try to <code>.show()</code>/<code>print()</code>.</p> <blockquote> <p>No log output to display. Use commands like .show() or print() to write log output.</p> </blockquote> <p>Even <code>print(&quot;Hello, Snowsight!&quot;)</code> causes the same message. I don't know what's happening, and I confirmed that my environment already meets the requirements the official documentation provides. I would appreciate any advice. Thanks.</p>
<python><dataframe><snowflake-cloud-data-platform>
2025-08-26 03:50:35
1
623
user9191983
79,746,308
5,483,457
Extending sympy variable definition to binary
<p>I am trying to extend the Symbol class to a binary variable, where <code>+</code> would mimic an XOR operator, such that</p> <p>x + y -&gt; x + y</p> <p>x + y + x -&gt; y</p> <p>x + x -&gt; 0</p> <p>I have referenced the code <a href="https://stackoverflow.com/questions/63587691/sympy-binary-variable-definition">here</a> and added <code>_eval_add</code>, but SymPy doesn't seem to call the function.</p> <p>This is my code:</p> <pre><code>from sympy import Symbol, Integer class XorBinary(Symbol): def _eval_power(self, other): if other % 2 == 0: return Integer(1) elif self == other: return Integer(0) else: return self def _eval_add(self, other): print(&quot;add&quot;, self, other) if self == other: return Integer(0) return self # Example usage x = XorBinary(&quot;x&quot;) y = XorBinary(&quot;y&quot;) print(&quot;x^2 =&quot;, x**2) # -&gt; 1 print(&quot;x^3 =&quot;, x**3) # -&gt; x print(&quot;x^0 =&quot;, x**0) # -&gt; 1 print(&quot;x*x =&quot;, x*x) # -&gt; x print(&quot;x+x =&quot;, x + x) # -&gt; x print(&quot;x+y =&quot;, x+y) # normal sum (unless XOR enforced) print(&quot;x+y+x =&quot;, x+y+x)# -&gt; y </code></pre>
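<p>Another direction I looked at is overriding <code>__add__</code> directly (a sketch; note it only collapses the immediate pairwise case, so <code>x + x</code> becomes <code>0</code> but <code>x + y + x</code> still does not reduce to <code>y</code>):</p> <pre><code>from sympy import Symbol, Integer

class XorBinary(Symbol):
    def __add__(self, other):
        if self == other:  # x + x -&gt; 0 over GF(2)
            return Integer(0)
        return super().__add__(other)

    __radd__ = __add__
</code></pre>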
<python><sympy>
2025-08-26 00:57:05
2
496
albusSimba
79,746,284
4,996,797
One liner for printing python's path
<p>I am trying to put a one-liner in my Bash script that would print my Python's path:</p> <pre class="lang-bash prettyprint-override"><code>python -c 'import sys; for p in sys.path: print(p)' </code></pre> <p>The <code>for</code> keyword is flagged as invalid syntax.</p> <p>I was expecting the semicolon to work as a newline character, yet it disappointed me.</p>
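<p>The workaround I am aware of is replacing the statement with an expression (this does run, but I would still like to understand why the semicolon form fails):</p> <pre class="lang-bash prettyprint-override"><code>python -c 'import sys; print(&quot;\n&quot;.join(sys.path))'
</code></pre>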
<python><python-import>
2025-08-26 00:09:22
2
408
Paweł Wójcik
79,746,181
20,591,261
How to display Polars list columns in NiceGUI
<p>I’m migrating from Streamlit to NiceGUI and noticed a difference in how DataFrames with list columns are handled.</p> <p>With Streamlit:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl import streamlit as st df = pl.DataFrame({ &quot;a&quot;: [1, 2, 3], &quot;b&quot;: [4, 5, 6], &quot;c&quot;: [[1, 2], [3, 4], [5, 6]], }) st.write(df) </code></pre> <p>Column c is shown as a list: <code>[1, 2], [3, 4], [5, 6]</code>.</p> <p>With NiceGUI:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl from nicegui import ui df = pl.DataFrame({ &quot;a&quot;: [1, 2, 3], &quot;b&quot;: [4, 5, 6], &quot;c&quot;: [[1, 2], [3, 4], [5, 6]], }) ui.table.from_polars(df) ui.run() </code></pre> <p>Column c gets concatenated into a string instead of being displayed as a list.</p> <p>So I’m looking for a way to achieve a result similar to what I get with my Streamlit script.</p> <p>P.S.: the transformation via the <code>from_polars()</code> method also makes the program really slow.</p>
<python><python-polars><nicegui>
2025-08-25 20:46:34
2
1,195
Simon
79,746,113
856,804
Python injector doesn't work for NewType based on tuple[str, str]
<p>I'm trying to use <a href="https://github.com/python-injector/injector" rel="nofollow noreferrer">injector</a> to inject a <code>tuple[str, str]</code>, but it doesn't work.</p> <pre><code>import injector from typing import NewType Foo = NewType(&quot;Foo&quot;, tuple[str, str]) class MyModule(injector.Module): def configure(self, binder: injector.Binder) -&gt; None: binder.bind(Foo, Foo((&quot;x&quot;, &quot;y&quot;))) injector.Injector(modules=[MyModule]).get(Foo) </code></pre> <p>leads to error</p> <pre><code>Traceback (most recent call last): File &quot;/tmp/injector_demo/scratch.py&quot;, line 12, in &lt;module&gt; injector.Injector(modules=[MyModule]).get(Foo) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/injector_demo/.venv/lib/python3.12/site-packages/injector/__init__.py&quot;, line 925, in __init__ self.binder.install(module) File &quot;/tmp/injector_demo/.venv/lib/python3.12/site-packages/injector/__init__.py&quot;, line 564, in install instance(self) File &quot;/tmp/injector_demo/.venv/lib/python3.12/site-packages/injector/__init__.py&quot;, line 871, in __call__ self.configure(binder) File &quot;/tmp/injector_demo/scratch.py&quot;, line 9, in configure binder.bind(Foo, Foo((&quot;x&quot;, &quot;y&quot;))) File &quot;/tmp/injector_demo/.venv/lib/python3.12/site-packages/injector/__init__.py&quot;, line 465, in bind self._bindings[interface] = self.create_binding(interface, to, scope) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/injector_demo/.venv/lib/python3.12/site-packages/injector/__init__.py&quot;, line 569, in create_binding provider = self.provider_for(interface, to) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/injector_demo/.venv/lib/python3.12/site-packages/injector/__init__.py&quot;, line 631, in provider_for raise UnknownProvider('couldn\'t determine provider for %r to %r' % (interface, to)) injector.UnknownProvider: couldn't determine provider for __main__.Foo to ('x', 'y') </code></pre> <p>I've been injecting <code>NewType('Bar', str)</code> many times and they all work, so it has to be due to something special related to <code>tuple[str, str]</code>. I wonder if anyone has insights into why.</p>
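<p>The only lead I have is wrapping the instance in a provider explicitly (a sketch; <code>InstanceProvider</code> does exist in the library, but whether it is the intended escape hatch here is my assumption):</p> <pre><code>class MyModule(injector.Module):
    def configure(self, binder: injector.Binder) -&gt; None:
        binder.bind(Foo, to=injector.InstanceProvider(Foo((&quot;x&quot;, &quot;y&quot;))))
</code></pre>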
<python><dependency-injection><python-typing>
2025-08-25 19:13:29
1
9,110
zyxue
79,745,977
11,741,232
Starting multiprocessing.Process() in a Pytest test/Python creates a Windows fatal exception: access violation
<p>On windows, running the below code with Python or pytest makes it print out <code>Windows fatal exception: access violation</code> (but the script will continue with no issue)</p> <p>Reproduction:</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp import faulthandler faulthandler.enable() def dummy_function(): import os print(f&quot;Worker PID: {os.getpid()}&quot;) print(&quot;Worker process completed&quot;) def test_reproduce(): p = mp.Process(target=dummy_function) p.start() p.join() print(f&quot;Process exit code: {p.exitcode}&quot;) if __name__ == &quot;__main__&quot;: print(&quot;Direct run:&quot;) test_reproduce() print(&quot;Success&quot;) </code></pre> <p>Trace:</p> <pre><code>Windows fatal exception: access violation Current thread 0x00000f48 (most recent call first): File &quot;C:\Python\cpython-3.10.16-windows-x86_64-none\lib\multiprocessing\popen_spawn_win32.py&quot;, line 73 in __init__ File &quot;C:\Python\cpython-3.10.16-windows-x86_64-none\lib\multiprocessing\context.py&quot;, line 336 in _Popen File &quot;C:\Python\cpython-3.10.16-windows-x86_64-none\lib\multiprocessing\context.py&quot;, line 224 in _Popen File &quot;C:\Python\cpython-3.10.16-windows-x86_64-none\lib\multiprocessing\process.py&quot;, line 121 in start File &quot;C:\src\sandbox\minimal.py&quot;, line 5 in test_reproduce </code></pre> <p>This points to the <code>_winapi.CreateProcess()</code> function.</p> <p>Note that if <code>faulthandler</code> is not enabled, this will not be shown to the user when running with Python, whereas Pytest enables the <code>faulthandler</code> for you.</p> <p>Happens on two of my computers, Windows 11 and Windows 10. Python 3.10 and Python 3.12. This has definitely also worked before so I am not sure what has changed.</p>
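<p>As a data point, the report can be silenced under pytest by disabling its faulthandler plugin — though that only hides the message rather than explaining the access violation:</p> <pre><code>pytest -p no:faulthandler
</code></pre>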
<python><python-3.x><multiprocessing>
2025-08-25 16:04:39
1
694
kevinlinxc
79,745,957
13,568,108
Cloud Composer Upgrade: composer-2.13.9-airflow-2.9.3 -> composer-2.13.9-airflow-2.10.5
<p>I am trying to upgrade my cloud composer instance from composer-2.13.9-airflow-2.9.3 to composer-2.13.9-airflow-2.10.5. I am getting many warnings in the dependency resolver like so:</p> <pre><code>WARNING: google-cloud-aiplatform 0.7.1 does not provide the extra 'evaluation' WARNING: google-cloud-aiplatform 0.7.0 does not provide the extra 'evaluation' WARNING: google-cloud-aiplatform 0.6.0 does not provide the extra 'evaluation' WARNING: google-cloud-aiplatform 0.5.1 does not provide the extra 'evaluation' WARNING: google-cloud-aiplatform 0.5.0 does not provide the extra 'evaluation' WARNING: google-cloud-aiplatform 0.4.0 does not provide the extra 'evaluation' WARNING: google-cloud-aiplatform 0.3.1 does not provide the extra 'evaluation' </code></pre> <p>I need to understand if this is a common issue with the google-cloud-aiplatform and if so, how to solve it / what packages most conflict with this library.</p>
<python><google-cloud-platform><google-cloud-composer>
2025-08-25 15:42:14
2
317
FVCC
79,745,940
885,650
Debugging parallel python program in interruptible sleep
<p>I have an mpi4py program, which runs well with <code>mpiexec -np 30 python3 -O myscript.py</code> at 100% CPU usage on each of the 30 CPUs.</p> <p>Now I am launching 8 instances with <code>mpiexec -np 16 python3 -O myscript.py</code>. That should be fine: I have 64 cores with 4 hardware threads each, and <code>nproc</code> shows <code>256</code>. Load is near 128 (8x16).</p> <p>Nothing else is running and most of the terabyte of RAM is free, but most of my processes are at 25%-30% CPU usage in state S (interruptible sleep). My mpi4py code is a loop: first a bcast of a value computed by the rank 0 node, then twice a scatter and a gather command. In between, the code run on each node uses onnx, numba, tensorflow.keras and other libraries (with OPENMPI_NUM_THREADS=1 and an onnx configuration that avoids further parallelisation). I do not use GPUs (everything is configured to use CPUs).</p> <p>It is not clear to me (at least I cannot find much online) how to narrow down where an MPI Python program spends its time. Normally, I would run <code>python -m cProfile -o prof</code>, but I am unsure how to do that under mpiexec and still get sensible (non-mangled) output files.</p> <p>The related question here: <a href="https://stackoverflow.com/questions/35987982/process-is-in-interruptible-sleep-how-to-find-out-what-it-is-waiting-for">Process is in interruptible sleep - how to find out what it is waiting for</a> uses a technique for C programs. Is there something similar for identifying where in a Python program sleep time is spent?</p>
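<p>For the profiling half, the per-rank variant I have in mind looks like this (a sketch; it writes one stats file per rank so the outputs are not mangled, and <code>main()</code> stands in for my actual work loop):</p> <pre><code>import cProfile
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()
profiler = cProfile.Profile()
profiler.enable()
main()  # placeholder for the real entry point
profiler.disable()
profiler.dump_stats(f'prof.rank{rank}')
</code></pre>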
<python><linux><mpi><hpc><mpi4py>
2025-08-25 15:26:02
1
2,721
j13r
79,745,913
3,110,740
Hide legend labels with underscore in matplotlib>3.10
<p>I create a plot with seaborn that has several lines and error bands. In the legend, I only want to show some of the labels and hide others.</p> <p>Previously, it was possible to call <code>ax.legend(['one', '_', 'two'])</code> to hide specific labels/artists from appearing in the legend. However, in the newest matplotlib version (&gt;3.10) this behaviour has been removed. As I'm using seaborn, I don't have access to the <code>ax.plot</code> calls themself, in which I could set the labels manually.</p> <p>Is there any other way to selectively show only some legend labels and suppress others?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt df = pd.DataFrame({'timepoint': np.random.randint(0, 10, 1000), 'class': np.random.choice(['cat', 'dog', 'duck', 'hare'], 1000), 'probability': np.random.rand(1000) }) sns.lineplot(df, x='timepoint', y='probability', hue='class') # previously, providing underscores as names was hiding the label # this has unfortunately been deprecated plt.legend(['cat', *['_']*5, 'hare']) </code></pre> <p><a href="https://i.sstatic.net/7W88jmeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7W88jmeK.png" alt="enter image description here" /></a></p>
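<p>The fallback I am experimenting with is filtering the handles and labels by hand (assuming seaborn exposes the line labels via the usual Matplotlib handles — I am not 100% sure that holds across versions):</p> <pre class="lang-py prettyprint-override"><code>ax = sns.lineplot(df, x='timepoint', y='probability', hue='class')
handles, labels = ax.get_legend_handles_labels()
keep = {'cat', 'hare'}
pairs = [(h, l) for h, l in zip(handles, labels) if l in keep]
ax.legend(*zip(*pairs))
</code></pre>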
<python><matplotlib><seaborn><legend>
2025-08-25 14:58:47
2
2,270
skjerns
79,745,352
1,942,868
Get the attribute data by another attribute beautifulsoup
<p>I want to parse HTML like the snippet below with Beautiful Soup:</p> <pre><code>. . &lt;meta property=&quot;og:image&quot; content=&quot;https://test.com/test.jp&quot; /&gt; &lt;meta property=&quot;og:description&quot; content=&quot;mydescription&quot; /&gt; &lt;meta http-equiv=&quot;X-UA-Compatible&quot; content=&quot;IE=edge,chrome=1&quot; /&gt; &lt;meta http-equiv=&quot;content-style-type&quot; content=&quot;text/css&quot; /&gt; . . </code></pre> <p>With this script,</p> <pre><code>bs = BeautifulSoup(response.text, 'html.parser') metas = bs.find_all('meta') </code></pre> <p>I can get the array of <code>meta</code> tags.</p> <p>However, I want to get the content attribute, such as <code>https://test.com/test.jp</code>, selected by the key attribute <code>property=&quot;og:image&quot;</code>,</p> <p>like</p> <pre><code>metas.find('property:&quot;og:image&quot;').content </code></pre> <p>How can I do that?</p>
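<p>From reading the docs, I suspect the <code>attrs</code> parameter is the way to express this (untested sketch):</p> <pre><code>tag = bs.find('meta', attrs={'property': 'og:image'})
if tag is not None:
    print(tag['content'])  # https://test.com/test.jp
</code></pre>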
<python><beautifulsoup>
2025-08-25 04:44:00
2
12,599
whitebear
79,745,232
824,624
pandas convert date failed - pd.to_datetime(df['xxx'], format='%Y-%m-%d').dt.date
<p>I am facing one little problem. I am storing some date time data and the data is</p> <pre><code>#secCode,secName,announcementTitle,announcementId,announcementTime 003816,xxx name,2024report,1222913141,1743004800000 </code></pre> <p>the date time column is clearly string - 1743004800000, so when I try to convert it</p> <pre><code>df['announcementTime'] = pd.to_datetime(df['announcementTime'], format='%Y-%m-%d').dt.date </code></pre> <p>I got this error</p> <pre><code> File &quot;/root/miniconda3/lib/python3.12/site-packages/pandas/core/tools/datetimes.py&quot;, line 1072, in to_datetime values = convert_listlike(arg._values, format) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/root/miniconda3/lib/python3.12/site-packages/pandas/core/tools/datetimes.py&quot;, line 435, in _convert_listlike_datetimes return _array_strptime_with_fallback(arg, name, utc, format, exact, errors) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: time data &quot;1743004800000&quot; doesn't match format &quot;%Y-%m-%d&quot;, at position 0. You might want to try: - passing `format` if your strings have a consistent format; - passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format; - passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this. df['announcementTime'] = pd.to_datetime(df['announcementTime'], format='%Y-%m-%d') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/root/miniconda3/lib/python3.12/site-packages/pandas/core/tools/datetimes.py&quot;, line 1072, in to_datetime </code></pre>
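<p>Based on the value itself, I suspect the column holds epoch milliseconds rather than a formatted date string, so what I am about to try is (sketch):</p> <pre><code>df['announcementTime'] = pd.to_datetime(
    df['announcementTime'].astype('int64'), unit='ms'
).dt.date
</code></pre>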
<python><pandas>
2025-08-24 23:44:24
1
8,168
user824624
79,744,904
12,415,855
How to copy a picture cell using python with xlwings?
<p>I am trying to copy the picture cells in column B using xlwings, with the following code snippet to first read all the images:</p> <pre><code>import os import sys import xlwings as xw path = os.path.abspath(os.path.dirname(sys.argv[0])) fn = os.path.join(path, &quot;Interactive Price List-Schnellstartneu (1).xlsx&quot;) wb = xw.Book (fn) ws = wb.sheets[0] inpPics = ws.pictures print(len(inpPics)) </code></pre> <p>But it seems that with this statement I only get the image at the upper left, and not the icons from column B.</p> <p>This is how the cell looks in Excel when I select e.g. <code>B12</code>:</p> <p><a href="https://i.sstatic.net/DdjfFgg4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DdjfFgg4.png" alt="enter image description here" /></a></p> <p>And this is how the overall Excel sheet looks, with the images I need in column B:</p> <p><a href="https://i.sstatic.net/1KBIaW3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KBIaW3L.png" alt="enter image description here" /></a></p> <p>Is there any way to get the pictures from column B using xlwings (and afterwards copy them, for example, to another Excel sheet)?</p>
<python><xlwings>
2025-08-24 13:34:19
1
1,515
Rapid1898
79,744,744
1,224,075
Why is mypy ignoring stub file?
<p>I have the two files in the following directory structure:</p> <pre><code>. ├── mymod.py └── mymod.pyi </code></pre> <p>The files are as follows:</p> <pre class="lang-py prettyprint-override"><code>def add(a, b): return a + b if __name__ == &quot;__main__&quot;: add(None, None) </code></pre> <pre class="lang-py prettyprint-override"><code>from typing import ( Union, ) def add(a: Union[int, str], b: Union[int, str]) -&gt; Union[int, str]: ... </code></pre> <p>I do not have any environment variables or config for stub files. The mypy cache is also cleared.</p> <p>When I run <code>mypy mymod.py</code>, why does it not generate any errors?</p>
<python><mypy>
2025-08-24 08:15:04
0
2,107
tinkerbeast
79,744,735
9,087,250
Trigger DBT core jobs for Snowflake using Airflow
<p>Airflow is installed using Docker and it is running fine. Now I am trying to add dbt-core, dbt-snowflake and astronomer-cosmos python packages to the Airflow image in order to run the DBT core jobs using Airflow.</p> <p>I am trying to add the Python libraries using <code>requirements.txt file</code>. The contents of the <code>requirements.txt</code> file are:</p> <pre><code>astronomer-cosmos==1.9.0 dbt-core==1.9.0 dbt-snowflake==1.9.0 psycopg2-binary==2.9.9 </code></pre> <p>In order to replicate my issue: The contents of <code>docker-compose.yaml</code> are:</p> <pre><code>version: '3.8' x-airflow-common: &amp;airflow-common build: context: . dockerfile: Dockerfile env_file: .env environment: AIRFLOW__CORE__EXECUTOR: LocalExecutor AIRFLOW__CORE__FERNET_KEY: '' AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true' AIRFLOW__CORE__LOAD_EXAMPLES: 'false' AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow AIRFLOW__API__AUTH_BACKENDS: 'airflow.api.auth.backend.basic_auth' AIRFLOW__WEBSERVER__SECRET_KEY: 'mysecretkey' volumes: - ./dags:/opt/airflow/dags - ./logs:/opt/airflow/logs - ./plugins:/opt/airflow/plugins - ./dbt:/opt/airflow/dbt services: postgres: image: postgres:15 environment: POSTGRES_USER: airflow POSTGRES_PASSWORD: airflow POSTGRES_DB: airflow volumes: - postgres-db-volume:/var/lib/postgresql/data airflow-webserver: &lt;&lt;: *airflow-common command: webserver ports: - &quot;8080:8080&quot; depends_on: - postgres airflow-scheduler: &lt;&lt;: *airflow-common command: scheduler depends_on: - postgres airflow-init: &lt;&lt;: *airflow-common command: &gt; bash -c &quot;airflow db migrate &amp;&amp; airflow users create --username airflow --password airflow --firstname Air --lastname Flow --role Admin --email airflow@example.com&quot; depends_on: - postgres volumes: - ./wait-for-postgres.sh:/wait-for-postgres.sh volumes: postgres-db-volume: </code></pre> <p>The contents of the <code>Dockerfile</code> are:</p> <pre><code>FROM apache/airflow:2.10.5-python3.9 USER root RUN apt-get update &amp;&amp; apt-get install -y \ build-essential \ unixodbc-dev \ gcc \ g++ \ curl \ &amp;&amp; apt-get clean USER airflow COPY requirements.txt . RUN pip install --no-cache-dir -r requirements.txt </code></pre> <p>The contents of the <code>wait-for-postgres.sh</code> are:</p> <pre><code>#!/bin/bash set -e host=&quot;$1&quot; shift cmd=&quot;$@&quot; until pg_isready -h &quot;$host&quot; -p 5432 -U airflow; do &gt;&amp;2 echo &quot;Postgres is unavailable - sleeping&quot; sleep 2 done &gt;&amp;2 echo &quot;Postgres is up - executing command&quot; exec $cmd </code></pre> <p>Now I am doing trial and error with the version of the <code>astronomer-cosmos, dbt-core and dbt-snowflake</code> if I increase the version to the latest or decrease the version to the older ones, it's breaking some where.</p> <p>The error is:</p> <pre><code>File &quot;/home/airflow/.local/lib/python3.9/site-packages/cosmos/dbt/runner.py&quot;, line 109, in handle_exception_if_needed raise CosmosDbtRunError(f&quot;dbt invocation did not complete with unhandled error: {result.exception}&quot;) cosmos.exceptions.CosmosDbtRunError: dbt invocation did not complete with unhandled error: No such option: --no-static-parser [2025-08-24, 07:19:24 UTC] {taskinstance.py:1226} INFO - Marking task as FAILED. 
dag_id=dev_test_dag, task_id=dev_test_dag.example_1_run, </code></pre> <p>The other error is:</p> <pre><code>The conflict is caused by: 1404.9 dbt-core 1.9.0 depends on protobuf&lt;6.0 and &gt;=5.0 1404.9 dbt-adapters 1.16.5 depends on protobuf&lt;7.0 and &gt;=6.0 1404.9 dbt-core 1.9.0 depends on protobuf&lt;6.0 and &gt;=5.0 1404.9 dbt-adapters 1.16.4 depends on protobuf&lt;7.0 and &gt;=6.0 1404.9 1404.9 To fix this you could try to: 1404.9 1. loosen the range of package versions you've specified 1404.9 2. remove package versions to allow pip to attempt to solve the dependency conflict </code></pre> <p>Now I need to know: which versions should I use in the <code>requirements.txt</code> file?</p>
<python><snowflake-cloud-data-platform><airflow><dbt><astronomer>
2025-08-24 07:55:35
0
1,584
Teja Goud Kandula
79,744,440
703,421
How in python can I transform negative secs into struct time?
<p>I'd like to transform <code>-238204800</code> seconds to <code>&quot;1962:06:15 00:00:00&quot;</code>, using Python 3.8.</p> <p>My code is:</p> <pre><code>from time import strftime, localtime secs = -238204800 print(strftime(&quot;%Y:%m:%d %H:%M:%S&quot;, localtime(secs))) </code></pre> <p><code>localtime()</code> gives <code>OSError: [Errno 22] Invalid argument</code>.</p> <p>How can I process such dates, given negative seconds?</p>
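<p>The closest I have got is sidestepping <code>localtime()</code> with <code>datetime</code> arithmetic (a sketch; note it works in UTC rather than local time):</p> <pre><code>from datetime import datetime, timedelta, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print((epoch + timedelta(seconds=-238204800)).strftime('%Y:%m:%d %H:%M:%S'))
# 1962:06:15 00:00:00
</code></pre>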
<python><time><negative-number><localtime>
2025-08-23 16:48:44
1
2,279
Eric H.
79,744,397
4,794
Tiny numbers in ReportLab's table of contents
<p>I followed an <a href="https://docs.reportlab.com/reportlab/userguide/ch9_other_useful_flowables/#tableofcontents" rel="nofollow noreferrer">example</a> from the ReportLab documentation on how to add a table of contents, and it works fine in most situations. However, if a section title is very close to the full page width, then the page number gets shrunk to fit. Sometimes, it's shrunk to the point you can't see it at all.</p> <p>Has anyone found a way to work around this by forcing the section title to wrap earlier, leaving enough room for the page number?</p> <p>Here's a demonstration of the problem:</p> <p><a href="https://i.sstatic.net/cqMEw9gY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cqMEw9gY.png" alt="PDF screenshot" /></a></p> <p>You can see that with 3, 4, and 5 i's the page number shrinks and disappears.</p> <p>Here's my modified version of the example that generated the table of contents above:</p> <pre><code>from reportlab.lib.styles import ParagraphStyle as PS from reportlab.platypus import PageBreak from reportlab.platypus.paragraph import Paragraph from reportlab.platypus.doctemplate import PageTemplate, BaseDocTemplate from reportlab.platypus.tableofcontents import TableOfContents from reportlab.platypus.frames import Frame from reportlab.lib.units import cm class MyDocTemplate(BaseDocTemplate): def __init__(self, filename, **kw): self.allowSplitting = 0 BaseDocTemplate.__init__(self, filename, **kw) template = PageTemplate('normal', [Frame(2.5 * cm, 2.5 * cm, 15 * cm, 25 * cm, id='F1')]) self.addPageTemplates(template) def afterFlowable(self, flowable): &quot;Registers TOC entries.&quot; if flowable.__class__.__name__ == 'Paragraph': text = flowable.getPlainText() style = flowable.style.name if style == 'Heading1': self.notify('TOCEntry', (0, text, self.page)) if style == 'Heading2': self.notify('TOCEntry', (1, text, self.page)) h1 = PS(name='Heading1', fontSize=14, leading=16) h2 = PS(name='Heading2', fontSize=12, leading=14, leftIndent=10) # Build story. story = [] toc = TableOfContents() # For conciseness we use the same styles for headings and TOC entries toc.levelStyles = [h1, h2] story.append(toc) story.append(PageBreak()) story.append(Paragraph('Table of Contents Test', h1)) for i in range(10): story.append(Paragraph('XXXXX ' * 9 + 'i' * i, h2)) doc = MyDocTemplate('toc_test.pdf') doc.multiBuild(story) </code></pre> <p>I looked in the <a href="https://hg.reportlab.com/hg-public/reportlab/file/61ba11e7d143/src/reportlab/platypus/tableofcontents.py#l84" rel="nofollow noreferrer">ReportLab source code</a>, and found that it really is just shrinking the page number until it fits or reaches a font size of 1 pixel.</p>
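<p>The only workaround idea I have is reserving space on the right of each level style so long titles wrap before reaching the number column (untested — I do not know whether the TOC entry paragraph honours <code>rightIndent</code> when wrapping):</p> <pre><code>h1 = PS(name='Heading1', fontSize=14, leading=16, rightIndent=30)
h2 = PS(name='Heading2', fontSize=12, leading=14, leftIndent=10, rightIndent=30)
</code></pre>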
<python><pdf><reportlab><tableofcontents>
2025-08-23 15:56:39
0
56,676
Don Kirkby
79,744,396
3,138,436
ouput unicode character by suppressing original keystroke of physical keyboard using evdev-python in linux
<p>I am using Kali 2022 (Linux) with xfce 4.16. I am experimenting with the Python <code>evdev</code> module. What I am trying to achieve is that when a certain key on the keyboard is pressed (like the letter 'a'), instead of the letter 'a', I want to post a specific Unicode character to the focused element. In other words, I want to map KEY_A to a different Unicode character which is not 'a'.</p> <p>In Kali Linux, I can type a Unicode character manually by pressing <code>ctrl</code>+<code>shift</code>+<code>u</code> and then typing the Unicode hex code for the character. But when I simulate these keystrokes using <code>uinput</code> in Python, the simulated keystrokes post the individual characters (like u0985) to the focused element; Linux for some reason does not show the Unicode character represented by them. What is the problem in my code, and how can it print the actual Unicode character?</p> <pre><code>import evdev from evdev import UInput, ecodes as e import time # Open the input device (replace with your actual device) dev = evdev.InputDevice('/dev/input/event2') dev.grab() # Grab the device to receive all events # Create a virtual uinput device with capabilities cap = {e.EV_KEY: [e.KEY_A, e.KEY_B,e.KEY_G,e.KEY_U,e.KEY_0,e.KEY_9,e.KEY_8,e.KEY_5,e.KEY_ENTER]} ui = UInput(cap, name='my_virtual_keyboard') for event in dev.read_loop(): if event.type == e.EV_KEY: key_event = evdev.categorize(event) if key_event.keystate == evdev.KeyEvent.key_down: # Simulate pressing Ctrl+Shift+U ui.write(event.type, e.KEY_LEFTCTRL, 1) ui.write(event.type, e.KEY_LEFTSHIFT, 1) ui.write(event.type, e.KEY_U, 1) ui.write(event.type, e.KEY_U, 0) # Release U ui.write(event.type, e.KEY_LEFTSHIFT, 0) # Release Shift ui.write(event.type, e.KEY_LEFTCTRL, 0) # Release Ctrl ui.syn() # Synchronize events time.sleep(0.1) # Small delay for the system to register the input mode # This would involve simulating individual key presses for '0', '9', '8', '5' # ... (code to simulate these key presses) ... ui.write(event.type, e.KEY_0, 1) ui.write(event.type, e.KEY_0, 0) ui.write(event.type, e.KEY_9, 1) ui.write(event.type, e.KEY_9, 0) ui.write(event.type, e.KEY_8, 1) ui.write(event.type, e.KEY_8, 0) ui.write(event.type, e.KEY_5, 1) ui.write(event.type, e.KEY_5, 0) ui.syn() time.sleep(0.1) # Small delay for the system to register the input mode # Simulate pressing Enter to confirm the Unicode input ui.write(event.type, e.KEY_ENTER, 1) ui.write(event.type, e.KEY_ENTER, 0) ui.syn() elif key_event.keystate == evdev.KeyEvent.key_up: pass else: # Re-inject other event types (e.g., EV_SYN) ui.write(event.type, event.code, event.value) #ungrab and close dev.ungrab() ui.close() </code></pre>
<python><linux><unicode><evdev><uinput>
2025-08-23 15:55:36
1
9,194
AL-zami
79,744,362
315,168
import tensorflow statement crashes or hangs on macOS
<p>I have the following Python statement, which I cannot execute in Jupyter Notebook or Python REPL:</p> <pre class="lang-py prettyprint-override"><code>import tensorflow </code></pre> <pre class="lang-none prettyprint-override"><code>Python 3.11.10 (main, Sep 20 2024, 14:23:57) [Clang 16.0.0 (clang-1600.0.26.3)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import tensorflow </code></pre> <p>On a console, you see the following thread locking error crashing the interpreter:</p> <pre class="lang-none prettyprint-override"><code>libc++abi: terminating due to uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument </code></pre> <p>When run in a Jupyter Notebook, it will cause the kernel to hang indefinitely.</p> <ul> <li>What causes this?</li> <li>How to make it run?</li> </ul> <p>Python 3.11. Tensorflow 2.20.0. Pyarrow 21.0.0. macOS ARM.</p> <p><a href="https://stackoverflow.com/questions/66773247/libcabi-dylib-terminating-with-uncaught-exception-of-type-std-1system-er">Related earlier question</a>.</p> <p>Below is the traceback of <code>python -vvv -c &quot;import tensorflow&quot;</code></p> <pre class="lang-none prettyprint-override"><code># trying /Users/moo/.pyenv/versions/3.11.10/lib/python3.11/lib-dynload/pyarrow.py # trying /Users/moo/.pyenv/versions/3.11.10/lib/python3.11/lib-dynload/pyarrow.pyc # /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/__pycache__/__init__.cpython-311.pyc matches /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/__init__.py # code object from '/Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/__pycache__/__init__.cpython-311.pyc' # trying /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/_generated_version.cpython-311-darwin.so # trying /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/_generated_version.abi3.so # trying /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/_generated_version.so # trying /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/_generated_version.py # /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/__pycache__/_generated_version.cpython-311.pyc matches /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/_generated_version.py # code object from '/Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/__pycache__/_generated_version.cpython-311.pyc' import 'pyarrow._generated_version' # &lt;_frozen_importlib_external.SourceFileLoader object at 0x3113025d0&gt; # trying /Users/moo/code/predictor2/.venv/lib/python3.11/site-packages/pyarrow/lib.cpython-311-darwin.so libc++abi: terminating due to uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument </code></pre>
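<p>Given that the crash fires while the import machinery is loading <code>pyarrow/lib.cpython-311-darwin.so</code>, one experiment I plan to try is flipping the import order — pure guesswork on my part, based on the two C++ runtimes apparently colliding:</p> <pre class="lang-py prettyprint-override"><code>import pyarrow  # load pyarrow's C++ runtime first
import tensorflow
</code></pre>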
<python><tensorflow><pyarrow>
2025-08-23 15:16:34
1
84,872
Mikko Ohtamaa
79,744,130
12,415,855
Get url of second tab using selenium?
<p>I'm trying to get the second tab's url using the following code:</p> <pre><code>import time import os, sys from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By print(f&quot;Program name: {os.path.basename(__file__)}&quot;) TRIAL = True BREAK_OUT = 10 SAVE_INTERVAL = 1 WAIT = 1 path = os.path.abspath(os.path.dirname(sys.argv[0])) print(f&quot;Checking Browser driver...&quot;) options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_argument('--use-gl=swiftshader') options.add_argument('--disable-gpu') options.add_argument('--no-sandbox') options.add_argument('--disable-dev-shm-usage') options.add_argument(&quot;start-maximized&quot;) options.add_argument('--log-level=3') options.add_argument('--enable-unsafe-swiftshader') srv=Service() driver = webdriver.Chrome (service=srv, options=options) waitWD = WebDriverWait (driver, 10) link = f&quot;https://www.solrenview.com/&quot; driver.get (link) waitWD.until(EC.frame_to_be_available_and_switch_to_it((By.XPATH,&quot;//iframe[@id='gMapsFr']&quot;))) driver.execute_script(&quot;arguments[0].click();&quot;, waitWD.until(EC.element_to_be_clickable((By.XPATH, '//label[@for=&quot;ByInstaller&quot;]')))) time.sleep(0.5) waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id=&quot;searchBox&quot;]'))).clear() time.sleep(0.5) waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id=&quot;searchBox&quot;]'))).send_keys(&quot;Barrier Solar Inc.&quot;) time.sleep(0.5) driver.execute_script(&quot;arguments[0].click();&quot;, waitWD.until(EC.element_to_be_clickable((By.XPATH, '//div[@id=&quot;autoSuggestionsList&quot;]//li')))) time.sleep(3) driver.execute_script(&quot;arguments[0].click();&quot;, waitWD.until(EC.element_to_be_clickable((By.XPATH, f'(//a[contains(text(), &quot;Double E&quot;)])[{1}]')))) currentURL = driver.current_url print(currentURL) input(&quot;Press!&quot;) </code></pre> <p>As you can see, when running the code a second tab opens with the url</p> <pre><code>https://www.solrenview.com/SolrenView/mainFr.php?siteId=5746 </code></pre> <p>When I run the program, I always only get this output:</p> <pre><code>https://www.solrenview.com/ </code></pre> <p>How can I get the url from the second (active) tab?</p>
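<p>I assume I need to switch window handles before reading the URL — something like this (sketch):</p> <pre><code>driver.switch_to.window(driver.window_handles[-1])  # jump to the newest tab
currentURL = driver.current_url
print(currentURL)
</code></pre>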
<python><selenium-webdriver><webdriver><webdriverwait><window-handles>
2025-08-23 09:17:49
2
1,515
Rapid1898
79,744,026
11,649,567
Ram Memory leak when scripting a Sampling Trainer for a Bert Encoder and LSTM Decoder Tensorflow on GPU
<p>I wrote the module attached below. However, I notice a constant increase of RAM until I get an out of memory error. The code runs on CPU without a problem (except the slow training time). It can finish the first training step, however the memory doesn't get release after the first training cycle and it crashes out. I would be extremely grateful if someone can solve the problem or guess what the problem is.</p> <pre><code>import tensorflow as tf import gc from config import SAMPLING_PROBABILITY #tf.config.run_functions_eagerly(True) class ScheduledSamplingTrainer(tf.keras.Model): def __init__(self, model): super().__init__() self.model = model self.schedule_prob = tf.Variable(SAMPLING_PROBABILITY, trainable=False, dtype=tf.float32) self.loss_fn = tf.keras.losses.MeanSquaredError() self.metric = tf.keras.metrics.MeanAbsoluteError() #@tf.function def train_step(self, data): x, y_true = data (encoder_input_ids, encoder_attention_mask, decoder_inputs) = x y_true = tf.cast(y_true, tf.float32) batch_size = tf.shape(encoder_input_ids)[0] pose_dim = tf.shape(y_true)[-1] seq_len = int(decoder_inputs.shape[1]) # mask per batch: (batch, seq_len, 1) mask = tf.cast(tf.expand_dims(tf.reduce_any(y_true != 0, axis=-1), -1), tf.float32) go_frame = tf.fill((batch_size, 1, pose_dim), tf.constant(0, dtype=tf.float32)) outputs_ta = tf.TensorArray(dtype=tf.float32, size=seq_len, dynamic_size=False) use_pred_mask = tf.random.uniform((seq_len,), dtype=tf.float32) &lt; self.schedule_prob with tf.GradientTape() as tape: encoder_outputs = self.model.bert_encoder( input_ids=encoder_input_ids, attention_mask=encoder_attention_mask, training=True ) mean_embedding = tf.reduce_mean(encoder_outputs.last_hidden_state, axis=1) mean_embedding = tf.cast(mean_embedding, tf.float32) initial_h = self.model.decoder_initial_state_h(mean_embedding) initial_c = self.model.decoder_initial_state_c(mean_embedding) encoder_states = [initial_h, initial_c] decoder_input = go_frame for t in range(seq_len): decoder_outputs, h, c = self.model.decoder_lstm( self.model.masking(decoder_input), initial_state=encoder_states ) encoder_states = [h,c] next_frame = self.model.decoder_dense(decoder_outputs[:, -1:, :]) next_frame = tf.cast(next_frame, outputs_ta.dtype) # scheduled sampling next_input = tf.where(use_pred_mask[t], next_frame, y_true[:, t:t+1, :]) decoder_input = next_input outputs_ta = outputs_ta.write(t, next_frame) outputs = outputs_ta.stack() # [seq_len, batch, 1, pose_dim] outputs = tf.transpose(outputs, [1, 0, 2, 3]) # [batch, seq_len, 1, pose_dim] outputs = tf.squeeze(outputs, axis=2) masked_outputs = outputs * mask masked_y_true = y_true * mask # --- PER-TIMESTEP LOSS --- loss = 0.0 for t in range(seq_len): loss += tf.reduce_mean(tf.square(masked_y_true[:, t, :] - masked_outputs[:, t, :])) loss /= tf.cast(seq_len, tf.float32) #loss = tf.cast(loss, self.model.trainable_variables[0].dtype) grads = tape.gradient(loss, self.model.trainable_variables) self.optimizer.apply_gradients(zip(grads, self.model.trainable_variables)) # masked metric update self.metric.update_state(masked_y_true, masked_outputs) del outputs_ta, decoder_input, decoder_outputs, next_frame, masked_outputs, masked_y_true, tape gc.collect() return {&quot;loss&quot;: loss, &quot;mae&quot;: self.metric.result()} </code></pre>
<python><python-3.x><tensorflow><deep-learning><lstm>
2025-08-23 05:40:46
1
400
mashtock
79,743,969
15,745,459
Python Win32Com: How do I open a password-protected Powerpoint?
<p>I tried to use the code below to open a password-protected PPT, but I got the error: TypeError: Open() got an unexpected keyword argument 'Password'</p> <p>Is it possible to use Win32Com to open a password-protected PowerPoint? Thank you.</p> <pre><code> import win32com.client as win32 xl = win32.Dispatch('PowerPoint.Application') xl.Visible = True xl.DisplayAlerts = True wb = xl.Presentations.Open(r'C:\Users\Downloads\PPT File.ppt', Password=&quot;123&quot;) </code></pre>
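<p>One unofficial workaround I have seen mentioned is smuggling the password into the file name itself; I have not verified it, and the <code>::open_password::modify_password</code> suffix syntax is the assumption here:</p> <pre><code>wb = xl.Presentations.Open(r'C:\Users\Downloads\PPT File.ppt::123::')
</code></pre>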
<python><powerpoint><win32com>
2025-08-23 01:56:10
1
395
Peter
79,743,851
10,708,345
Django cannot register oauth2_provider and rest_framework to INSTALLED_APPS
<p>I am working on <a href="https://github.com/FeelHippo/django_authentication" rel="nofollow noreferrer">this</a> weekend project, to learn Django, and I am stuck.</p> <p>Before adding the REST framework (<a href="https://github.com/FeelHippo/django_authentication/commit/3afb0f3d4b9c8b7c41fc862098e49758eae07e43" rel="nofollow noreferrer">one to last commit in the repo</a>), everything was working just fine.</p> <p>Once I added the <code>djangorestframework</code> library, everything fell apart. Now, whether you run the app in a venv or in Docker, you will get the same result:</p> <p><code>RuntimeError: Model class oauth2_provider.models.Application doesn't declare an explicit app_label and isn't in an application in INSTALLED_APPS.</code></p> <p>... or a similar one related to rest_framework's Token. Both libraries are installed.</p> <p>I understand the problem is that something is wrong with <code>INSTALLED_APPS</code>. But... WHAT?!</p> <p>The <a href="https://www.django-rest-framework.org/api-guide/authentication/#tokenauthentication" rel="nofollow noreferrer">docs</a> make me think I am not doing anything wrong. If you look at the imports, <code>from rest_framework etc etc</code> and <code>from oauth2_provider etc etc</code> seem to be the problem.</p> <p>Error logs and stack traces are quite useless; there is no useful info in there.</p>
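<p>For completeness, these are the entries I believe the two libraries expect in <code>INSTALLED_APPS</code> (taken from their docs; whether my settings module is actually the one being loaded is the part I am unsure about):</p> <pre><code>INSTALLED_APPS = [
    # ... the django.contrib defaults ...
    'rest_framework',
    'rest_framework.authtoken',  # provides the Token model
    'oauth2_provider',
]
</code></pre>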
<python><django><pip><venv>
2025-08-22 21:01:22
2
320
Fi Li Ppo
79,743,751
16,706,763
Elasticsearch index creation from Python, results in error, for known mapping
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>from elasticsearch import Elasticsearch client = Elasticsearch( &quot;https://myhost&quot;, api_key=&quot;mykey&quot;, request_timeout=30, # Increase timeout duration max_retries=10, retry_on_timeout=True ) dest_index = &quot;course_info_eng_v3d6_destination&quot; client.indices.create( index=dest_index, mappings={ &quot;dynamic&quot;: &quot;true&quot;, &quot;dynamic_templates&quot;: [ { &quot;all_text_fields&quot;: { &quot;match_mapping_type&quot;: &quot;string&quot;, &quot;mapping&quot;: { &quot;analyzer&quot;: &quot;iq_text_base&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;analyzer&quot;: &quot;iq_text_delimiter&quot;, &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot; }, &quot;joined&quot;: { &quot;search_analyzer&quot;: &quot;q_text_bigram&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot; }, &quot;prefix&quot;: { &quot;search_analyzer&quot;: &quot;q_prefix&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot; }, &quot;enum&quot;: { &quot;ignore_above&quot;: 2048, &quot;type&quot;: &quot;keyword&quot; }, &quot;stem&quot;: { &quot;analyzer&quot;: &quot;iq_text_stem&quot;, &quot;type&quot;: &quot;text&quot; } } } } } ], &quot;properties&quot;: { &quot;course_id&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;iq_text_delimiter&quot; }, &quot;enum&quot;: { &quot;type&quot;: &quot;keyword&quot;, &quot;ignore_above&quot;: 2048 }, &quot;joined&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;search_analyzer&quot;: &quot;q_text_bigram&quot; }, &quot;prefix&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;search_analyzer&quot;: &quot;q_prefix&quot; }, &quot;stem&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;analyzer&quot;: &quot;iq_text_stem&quot; } }, &quot;analyzer&quot;: &quot;iq_text_base&quot; }, &quot;course_name_backup&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;iq_text_delimiter&quot; }, &quot;enum&quot;: { &quot;type&quot;: &quot;keyword&quot;, &quot;ignore_above&quot;: 2048 }, &quot;joined&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;search_analyzer&quot;: &quot;q_text_bigram&quot; }, &quot;prefix&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;search_analyzer&quot;: &quot;q_prefix&quot; }, &quot;stem&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;analyzer&quot;: &quot;iq_text_stem&quot; } }, &quot;analyzer&quot;: &quot;iq_text_base&quot; }, &quot;embedding&quot;: { &quot;type&quot;: &quot;dense_vector&quot;, &quot;dims&quot;: 768, &quot;index&quot;: &quot;true&quot;, &quot;similarity&quot;: &quot;cosine&quot;, &quot;index_options&quot;: { &quot;type&quot;: &quot;int8_hnsw&quot;, &quot;m&quot;: 16, 
&quot;ef_construction&quot;: 100 } }, &quot;language&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;iq_text_delimiter&quot; }, &quot;enum&quot;: { &quot;type&quot;: &quot;keyword&quot;, &quot;ignore_above&quot;: 2048 }, &quot;joined&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;search_analyzer&quot;: &quot;q_text_bigram&quot; }, &quot;prefix&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;search_analyzer&quot;: &quot;q_prefix&quot; }, &quot;stem&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;analyzer&quot;: &quot;iq_text_stem&quot; } }, &quot;analyzer&quot;: &quot;iq_text_base&quot; }, &quot;provider&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;iq_text_delimiter&quot; }, &quot;enum&quot;: { &quot;type&quot;: &quot;keyword&quot;, &quot;ignore_above&quot;: 2048 }, &quot;joined&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;search_analyzer&quot;: &quot;q_text_bigram&quot; }, &quot;prefix&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;search_analyzer&quot;: &quot;q_prefix&quot; }, &quot;stem&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;analyzer&quot;: &quot;iq_text_stem&quot; } }, &quot;analyzer&quot;: &quot;iq_text_base&quot; }, &quot;related_skills&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;iq_text_delimiter&quot; }, &quot;enum&quot;: { &quot;type&quot;: &quot;keyword&quot;, &quot;ignore_above&quot;: 2048 }, &quot;joined&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;search_analyzer&quot;: &quot;q_text_bigram&quot; }, &quot;prefix&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;search_analyzer&quot;: &quot;q_prefix&quot; }, &quot;stem&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;analyzer&quot;: &quot;iq_text_stem&quot; } }, &quot;analyzer&quot;: &quot;iq_text_base&quot; }, &quot;timestamp&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;iq_text_delimiter&quot; }, &quot;enum&quot;: { &quot;type&quot;: &quot;keyword&quot;, &quot;ignore_above&quot;: 2048 }, &quot;joined&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;search_analyzer&quot;: &quot;q_text_bigram&quot; }, &quot;prefix&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;search_analyzer&quot;: &quot;q_prefix&quot; }, &quot;stem&quot;: { &quot;type&quot;: 
&quot;text&quot;, &quot;analyzer&quot;: &quot;iq_text_stem&quot; } }, &quot;analyzer&quot;: &quot;iq_text_base&quot; }, &quot;user&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;fields&quot;: { &quot;delimiter&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;iq_text_delimiter&quot; }, &quot;enum&quot;: { &quot;type&quot;: &quot;keyword&quot;, &quot;ignore_above&quot;: 2048 }, &quot;joined&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;freqs&quot;, &quot;analyzer&quot;: &quot;i_text_bigram&quot;, &quot;search_analyzer&quot;: &quot;q_text_bigram&quot; }, &quot;prefix&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;index_options&quot;: &quot;docs&quot;, &quot;analyzer&quot;: &quot;i_prefix&quot;, &quot;search_analyzer&quot;: &quot;q_prefix&quot; }, &quot;stem&quot;: { &quot;type&quot;: &quot;text&quot;, &quot;analyzer&quot;: &quot;iq_text_stem&quot; } }, &quot;analyzer&quot;: &quot;iq_text_base&quot; } } } ) </code></pre> <p>But when running it, results in the following error:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;C:\Users\david\Desktop\courserecsVDBs\test.py&quot;, line 62, in &lt;module&gt; client.indices.create( File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\utils.py&quot;, line 452, in wrapped return api(*args, **kwargs) File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\indices.py&quot;, line 705, in create return self.perform_request( # type: ignore[return-value] File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 422, in perform_request return self._client.perform_request( File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 271, in perform_request response = self._perform_request( File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 351, in _perform_request return api(*args, **kwargs) File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\indices.py&quot;, line 705, in create return self.perform_request( # type: ignore[return-value] File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 422, in perform_request return self._client.perform_request( File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 271, in perform_request response = self._perform_request( File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 351, in _perform_request return self.perform_request( # type: ignore[return-value] File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 422, in perform_request return self._client.perform_request( File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 271, in perform_request response = self._perform_request( File &quot;C:\Users\david\Desktop\courserecsVDBs\venv\lib\site-packages\elasticsearch\_sync\client\_base.py&quot;, line 351, in _perform_request raise HTTP_EXCEPTIONS.get(meta.status, ApiError)( elasticsearch.BadRequestError: BadRequestError(400, 
'mapper_parsing_exception', 'Failed to parse mapping: dynamic template [all_text_fields] has invalid content [{&quot;match_mapping_type&quot;:&quot;string&quot;,&quot;mapping&quot;:{&quot;analyzer&quot;:&quot;iq_text_base&quot;,&quot;fields&quot;:{&quot;delimiter&quot;:{&quot;analyzer&quot;:&quot;iq_text_delimiter&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;freqs&quot;},&quot;joined&quot;:{&quot;search_analyzer&quot;:&quot;q_text_bigram&quot;,&quot;analyzer&quot;:&quot;i_text_bigram&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;freqs&quot;},&quot;prefix&quot;:{&quot;search_analyzer&quot;:&quot;q_prefix&quot;,&quot;analyzer&quot;:&quot;i_prefix&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;docs&quot;},&quot;enum&quot;:{&quot;ignore_above&quot;:elasticsearch.BadRequestError: BadRequestError(400, 'mapper_parsing_exception', 'Failed to parse mapping: dynamic template [all_text_fields] has invalid content [{&quot;match_mapping_type&quot;:&quot;string&quot;,&quot;mapping&quot;:{&quot;analyzer&quot;:&quot;iq_text_base&quot;,&quot;fields&quot;:{&quot;delimiter&quot;:{&quot;analyzer&quot;:&quot;iq_text_delimiter&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;freqs&quot;},&quot;joined&quot;:{&quot;search_analyzer&quot;:&quot;q_text_bigram&quot;,&quot;analyzer&quot;:&quot;i_text_bigram&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;freqs&quot;},&quot;prefix&quot;:{&quot;search_analyzer&quot;:&quot;q_prefix&quot;,&quot;analyzer&quot;:&quot;i_prefix&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;docs&quot;},&quot;enum&quot;:{&quot;ignore_above&quot;:string&quot;,&quot;mapping&quot;:{&quot;analyzer&quot;:&quot;iq_text_base&quot;,&quot;fields&quot;:{&quot;delimiter&quot;:{&quot;analyzer&quot;:&quot;iq_text_delimiter&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;freqs&quot;},&quot;joined&quot;:{&quot;search_analyzer&quot;:&quot;q_text_bigram&quot;,&quot;analyzer&quot;:&quot;i_text_bigram&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;freqs&quot;},&quot;prefix&quot;:{&quot;search_analyzer&quot;:&quot;q_prefix&quot;,&quot;analyzer&quot;:&quot;i_prefix&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;docs&quot;},&quot;enum&quot;:{&quot;ignore_above&quot;:alyzer&quot;:&quot;i_text_bigram&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;freqs&quot;},&quot;prefix&quot;:{&quot;search_analyzer&quot;:&quot;q_prefix&quot;,&quot;analyzer&quot;:&quot;i_prefix&quot;,&quot;type&quot;:&quot;text&quot;,&quot;index_options&quot;:&quot;docs&quot;},&quot;enum&quot;:{&quot;ignore_above&quot;:2048,&quot;type&quot;:&quot;keyword&quot;},&quot;stem&quot;:{&quot;analyzer&quot;:&quot;iq_text_stem&quot;,&quot;type&quot;:&quot;text&quot;}}}}], attempted to validate it with the following match_mapping_type: [string]') </code></pre> <p>It is important to notice that:</p> <ol> <li>I have copied and pasted the mapping, from an already existing, functional ES data store, with the ultimate aim to <a href="https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-mtermvectors" rel="nofollow noreferrer">reindex</a> the contents of that data store, into my new data store <code>dest_index</code> (see screen capture after this paragraph)</li> <li>I can connect succesfully to the client, as I have other scripts involving it; however this is the very first time I attempt to create an ES 
index from a Python script.</li> <li>The ES version I am using (both in the host and the Python package) is 8.18.1.</li> </ol> <p><a href="https://i.sstatic.net/65L65ShB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65L65ShB.png" alt="enter image description here" /></a></p> <p>What am I doing wrong?</p>
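<p>For reference, this is a minimal sketch of how I plan to double-check my pasted mapping against what the working index actually reports (the source index name <code>course_info_eng_v3d6_source</code> is a placeholder for my existing data store):</p> <pre class="lang-py prettyprint-override"><code>import json

# Fetch the mapping of the already-working index and dump it sorted, so it
# can be diffed against the literal dict passed to indices.create() above.
resp = client.indices.get_mapping(index='course_info_eng_v3d6_source')
for name, body in resp.body.items():
    print(name)
    print(json.dumps(body['mappings'], indent=2, sort_keys=True))
</code></pre>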
<python><python-3.x><elasticsearch>
2025-08-22 18:42:06
1
879
David Espinosa
79,743,607
2,434,094
Modifying the character representation of stdout Pipe values
<p>I'm trying to use <code>subprocess.Popen()</code> to execute a Linux command and then process the stdout stream to extract the average CPU load. Here is my code:</p> <pre><code>#!/usr/bin/env python import subprocess Answers = &quot;&quot; IPs = [&quot;192.168.70.13&quot;] IPName = [&quot;xxx&quot;] for x in IPs: cmd2 = ['top -n 1 | grep &quot;%Cpu&quot;'] j = IPs.index(x) print(cmd2) proc2 = subprocess.Popen(cmd2, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) o, e = proc2.communicate() TempCpu = o.decode('ascii') TempCpu = TempCpu.strip() print(&quot;length is &quot; + str(len(TempCpu))) print(&quot;TempCpu = &quot; + TempCpu) k = TempCpu.find('id') print(&quot;index of id = &quot; + str(k)) value = TempCpu[k-6:k] print(value) print(IPName[j] + ': ' + o.decode('ascii')) Answers = Answers + IPName[j] + ' == ' + o.decode('ascii') print('Answers were :\n' + Answers) </code></pre> <p>The objective is to retrieve the CPU load and CPU temperature of a Raspberry Pi running Debian Linux. The output was</p> <blockquote> <pre class="lang-none prettyprint-override"><code>['top -n 1 | grep &quot;%Cpu&quot;'] length is 387 TempCpu = %Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st index of id = 169 xxx: %Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st Answers were : xxx == %Cpu(s): 0.0 us, 0.0 sy, 0.0 ni,100.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st </code></pre> </blockquote> <p>If one visually examines the TempCpu value, it is far shorter than the 387 characters that <code>len()</code> reports. So TempCpu evidently contains more than the characters it displays, which leads to other issues when I try to extract specific values, e.g. &quot;id&quot;.</p> <p>Thus the question is: how do I capture the output from the <code>Popen()</code> call in a format which I can manipulate?</p>
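<p>For comparison, here is a minimal sketch of what I understand the usual capture looks like, using <code>top</code>'s batch mode (<code>-b</code>), which I believe emits plain text rather than terminal control sequences:</p> <pre><code>import subprocess

# Batch mode (-b) makes top write plain text suitable for piping/capturing.
result = subprocess.run('top -b -n 1 | grep &quot;%Cpu&quot;',
                        shell=True, capture_output=True, text=True)
line = result.stdout.strip()
print(len(line), repr(line))  # repr() exposes any hidden characters
</code></pre>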
<python><python-3.x><linux><popen>
2025-08-22 15:53:44
1
415
RDK
79,743,175
2,889,733
Show a user input popup that waits for user input without rerunning entire app
<p>I'm making a Streamlit app in which I want to show a popup to the user to get some interim input from them. The problem is that I'm not able to get it to re-run in isolation. Here's my popup code for reference. This resides in a <code>utils.py</code> that I've imported into the main page script <code>page.py</code>:</p> <pre><code>@st.fragment @st.dialog(&quot;Enter Your Input&quot;) def get_user_input_popup(prompt_text): if 'dialog_waiting' not in st.session_state: st.session_state.dialog_waiting = True if st.session_state.dialog_waiting: with st.form(&quot;popup_form&quot;): user_input = st.text_input(&quot;Your response:&quot;, placeholder=&quot;Type here...&quot;) col1, col2 = st.columns(2) with col1: submit = st.form_submit_button(&quot;Submit&quot;, type=&quot;primary&quot;) with col2: cancel = st.form_submit_button(&quot;Cancel&quot;) if submit and user_input.strip(): st.session_state.dialog_result = user_input.strip() st.session_state.dialog_waiting = False st.rerun(scope=&quot;fragment&quot;) elif cancel: st.session_state.dialog_result = &quot;CANCELLED&quot; st.session_state.dialog_waiting = False st.rerun(scope=&quot;fragment&quot;) elif submit: st.error(&quot;Please enter some text!&quot;) return None else: return st.session_state.dialog_result </code></pre> <p>And here's the part in <code>page.py</code> that calls the above popup utility:</p> <pre><code>with st.spinner(&quot;Waiting for input...&quot;): response = get_user_input_popup(&quot;Need more info:&quot;) print(&quot;\n\n&quot;, response, &quot;\n\n&quot;, type(response)) </code></pre> <p>I thought that using <code>st.fragment</code> would limit the re-runs to just the popup. But what actually happens is that the whole page (i.e. <code>page.py</code>) gets re-run. And if I don't use the <code>st.rerun</code> lines in the function, the execution zooms past everything in the popup function and just returns None.</p> <p>How can I get this to work like a regular dialog box that actually waits for the user input?</p>
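<p>For context, this is a minimal sketch of the pattern I've seen in the docs, where the dialog never returns a value directly and the page reads <code>st.session_state</code> on the following run instead (the key name <code>dialog_result</code> is mine):</p> <pre><code>import streamlit as st

@st.dialog('Enter Your Input')
def ask():
    text = st.text_input('Your response:')
    if st.button('Submit') and text.strip():
        st.session_state.dialog_result = text.strip()
        st.rerun()  # closes the dialog; the page reruns and sees the result

if 'dialog_result' not in st.session_state:
    if st.button('Get input'):
        ask()
else:
    st.write('Got:', st.session_state.dialog_result)
</code></pre>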
<python><streamlit>
2025-08-22 08:46:13
0
371
user9343456
79,743,154
393,010
How to let pylsp find imports in local project?
<p>When starting my editor with pylsp inside a project folder I can see the LSP root dir is chosen to be the top folder, where <code>.git</code> is. However, the Python code is located in a subfolder, so the corresponding Python imports are not understood by the LSP (goto_definition results in &quot;No location found&quot;).</p> <p>How can I make pylsp understand where my Python code is?</p> <p>More details:</p> <p>The project structure is:</p> <pre class="lang-none prettyprint-override"><code>. ├── .git ├── src │   ├── python │   │   ├── foo.py │   │   ├── app │   │   │   └── core.py │   └── lang2 │      ├── bar.lang2 │ └── settings.py ├── system ├── tests │   └── run.py └── util    ├── examples    │   └── example.py    └── run.py </code></pre> <p>After starting my editor, I would like these imports to resolve correctly into the <code>src/python</code> folder:</p> <pre class="lang-py prettyprint-override"><code>import foo from app import core from app.core import Thing </code></pre> <p>I use Neovim with python-lsp-server as my LSP. What are my options to fix this issue?</p> <p>I tried to launch it like this, but it had no obvious effect:</p> <pre><code>PYTHONPATH=src/python/ nvim </code></pre> <p>The more general the solution, the better.</p>
<python><neovim><pylsp><python-lsp-server>
2025-08-22 08:31:23
0
5,626
Moberg
79,743,002
4,577,688
How do I create a PyTorch Dataset from multiple files where each file has multiple batches?
<p>How do I create a dataset that reads in data from multiple files, where each file has lots of rows or batches?</p> <p>For example, I have a partitioned parquet dataset (created with <code>pandas.to_parquet</code>), with text or embeddings in each row.</p> <p>The multiple-batches-per-file, multiple-file setup seems pretty common for text data, but for some reason all the examples for <code>torch.utils.data.Dataset</code> assume one observation per file (like for images) or one file for the entire corpus.</p> <p>torchdata seems to have some tools for this, but it appears to be no longer maintained.</p> <p>Is there an established module in PyTorch, or a third-party one, that handles this file structure?</p>
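<p>To illustrate the shape of what I'm after, here is a minimal sketch of an <code>IterableDataset</code> that streams record batches out of several parquet files (a sketch only: the paths and batch size are placeholders, and it ignores multi-worker sharding):</p> <pre><code>import pyarrow.parquet as pq
from torch.utils.data import IterableDataset

class ParquetIterable(IterableDataset):
    def __init__(self, paths, batch_size=1024):
        self.paths = paths
        self.batch_size = batch_size

    def __iter__(self):
        for path in self.paths:
            pf = pq.ParquetFile(path)
            # iter_batches() reads one RecordBatch at a time, so a file
            # never has to fit in memory all at once.
            for batch in pf.iter_batches(batch_size=self.batch_size):
                for row in batch.to_pylist():
                    yield row
</code></pre>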
<python><machine-learning><pytorch><nlp>
2025-08-22 05:35:23
1
3,840
dule arnaux
79,742,770
15,229,911
'Variable not allowed in type expression' warning when creating SQLAlchemy session using DI
<p>I have a basic SQLAlchemy setup for a FastAPI project. I made a dependency for a database session like most tutorials suggest doing:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import sessionmaker SessionLocal = sessionmaker(engine, **sqlalchemy_session_options) def get_session() -&gt; SessionLocal: with SessionLocal() as database: yield database </code></pre> <p>But Pylance gives me &quot;Variable not allowed in type expression&quot; warning:</p> <p><a href="https://i.sstatic.net/H3NzLjJO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3NzLjJO.png" alt="Pylance warning" /></a></p> <p>I'm not super familiar with Pylance or type annotations. What should I do to fix this warning? Or should I just suppress it?</p>
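<p>For what it's worth, this is a sketch of the annotation I'd expect based on my reading of the typing docs: the function is a generator yielding <code>Session</code> objects, while <code>SessionLocal</code> is a variable rather than a type (reusing <code>engine</code> and <code>sqlalchemy_session_options</code> from above):</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Iterator

from sqlalchemy.orm import Session, sessionmaker

SessionLocal = sessionmaker(engine, **sqlalchemy_session_options)

def get_session() -&gt; Iterator[Session]:
    # The function is a generator, so the return type describes what it
    # yields, not the sessionmaker variable itself.
    with SessionLocal() as database:
        yield database
</code></pre>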
<python><sqlalchemy><python-typing><pyright>
2025-08-21 21:08:35
1
324
postcoital-solitaire
79,742,629
2,829,355
Convert dictionary rows to new dataframe
<p>After importing some nested JSON data, I'm trying to create a new dataframe from all of the dictionary key / value pairs in an existing column.</p> <p>Starting point:</p> <pre><code>&gt;&gt;&gt; df['schedules'] 0 {'3263524': 'Group 1 CORE DAYS', '3263525': 'Group 1 CORE NIGHTS', '3263526': 'Group 1 EDUCATION', '3263527': 'Group 1 ROUNDING'} 1 {'3263524': 'Group 1 CORE DAYS', '3881368': 'VS Days', '3881370': 'VS Education Shift A', '3881455': 'VS Education Shift B'} ... </code></pre> <p>Desired output:</p> <pre><code>&gt;&gt;&gt; df_schedules id schedule_name 0 3263524 Group 1 CORE DAYS 1 3263525 Group 1 CORE NIGHTS 2 3263526 Group 1 EDUCATION 3 3263527 Group 1 ROUNDING 4 3881368 VS Days 5 3881370 VS Education Shift A 6 3881455 VS Education Shift B </code></pre> <p>I can get close with <code>pd.DataFrame.from_dict</code>, but need to name the columns and create an index.</p> <pre><code>df_schedules = pd.DataFrame.from_dict(df['schedules'].iloc[0], orient='index') </code></pre>
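<p>For reference, this is a sketch of the reshaping I'm attempting, in case it clarifies the goal (flatten every row's dict into pairs, then de-duplicate):</p> <pre><code>import pandas as pd

# Explode each row's dict into (id, schedule_name) pairs, then drop repeats.
pairs = [(k, v) for d in df['schedules'] for k, v in d.items()]
df_schedules = (pd.DataFrame(pairs, columns=['id', 'schedule_name'])
                  .drop_duplicates()
                  .reset_index(drop=True))
</code></pre>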
<python><json><pandas>
2025-08-21 17:46:44
1
831
skohrs
79,742,395
243,031
GitHub Actions Docker build not able to reach GCP Artifact Registry
<p>I have a Python package in GCP Artifact Registry and want to access it in a Docker image.</p> <p>GitHub Actions workflow:</p> <pre class="lang-yaml prettyprint-override"><code>jobs: publish: runs-on: ubuntu-latest permissions: contents: 'read' id-token: 'write' # Required for OIDC steps: - name: Checkout code uses: actions/checkout@v4 - name: Auth to GCP via OIDC uses: google-github-actions/auth@v2 with: workload_identity_provider: projects/&lt;ID&gt;/locations/global/workloadIdentityPools/&lt;PROJCTWIP&gt;/providers/&lt;OIDCPROVIDER&gt; service_account: &lt;SERVICEACCOUNT&gt;@&lt;PROJECTID&gt;.iam.gserviceaccount.com - name: Set up gcloud uses: google-github-actions/setup-gcloud@v2 with: project_id: &lt;PROJECTID&gt; - name: Install build dependencies run: | pip install keyrings.google-artifactregistry-auth - name: Configure Docker for Artifact Registry run: gcloud auth configure-docker us-central1-docker.pkg.dev - name: Build and push Docker image uses: docker/build-push-action@v5 with: context: . push: true tags: us-central1-docker.pkg.dev/&lt;PROJECTID&gt;/&lt;DOCKERREGISTRY&gt;/myimage:latest </code></pre> <p><code>Dockerfile</code>:</p> <pre class="lang-none prettyprint-override"><code>FROM python:3.12-slim RUN apt-get update &amp;&amp; \ apt-get install -y --no-install-recommends git openssh-client &amp;&amp; \ apt-get clean &amp;&amp; \ rm -rf /var/lib/apt/lists/* RUN pip install --extra-index-url https://us-central1-python.pkg.dev/&lt;PROJECTID&gt;/&lt;PYPIREGISTRY&gt;/simple/ my-backend==0.1.3 &amp;&amp; \ pip install gunicorn CMD [&quot;gunicorn&quot;, &quot;my_backend.app:app&quot;] </code></pre> <p>It gives this error:</p> <pre class="lang-none prettyprint-override"><code>7 [3/3] RUN pip install --extra-index-url https://us-central1-python.pkg.dev/&lt;PROJECTID&gt;/&lt;PYPIREGISTRY&gt;/simple/ my-backend==0.1.3 &amp;&amp; pip install gunicorn #7 1.481 Looking in indexes: https://pypi.org/simple, https://us-central1-python.pkg.dev/&lt;PROJECTID&gt;/&lt;PYPIREGISTRY&gt;/simple/, https://us-central1-python.pkg.dev/&lt;PROJECTID&gt;/&lt;PYPIREGISTRY&gt;/simple/ #7 1.890 User for us-central1-python.pkg.dev: User for us-central1-python.pkg.dev: WARNING: There was an error checking the latest version of pip.
#7 2.197 ERROR: Exception: #7 2.197 Traceback (most recent call last): #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/cli/base_command.py&quot;, line 106, in _run_wrapper #7 2.197 status = _inner_run() #7 2.197 ^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/cli/base_command.py&quot;, line 97, in _inner_run #7 2.197 return self.run(options, args) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/cli/req_command.py&quot;, line 67, in wrapper #7 2.197 return func(self, options, args) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/commands/install.py&quot;, line 386, in run #7 2.197 requirement_set = resolver.resolve( #7 2.197 ^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/resolver.py&quot;, line 95, in resolve #7 2.197 result = self._result = resolver.resolve( #7 2.197 ^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 546, in resolve #7 2.197 state = resolution.resolve(requirements, max_rounds=max_rounds) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 397, in resolve #7 2.197 self._add_to_criteria(self.state.criteria, r, parent=None) #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/resolvelib/resolvers.py&quot;, line 173, in _add_to_criteria #7 2.197 if not criterion.candidates: #7 2.197 ^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/resolvelib/structs.py&quot;, line 156, in __bool__ #7 2.197 return bool(self._sequence) #7 2.197 ^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 174, in __bool__ #7 2.197 return any(self) #7 2.197 ^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 162, in &lt;genexpr&gt; #7 2.197 return (c for c in iterator if id(c) not in self._incompatible_ids) #7 2.197 ^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py&quot;, line 49, in _iter_built #7 2.197 for version, func in infos: #7 2.197 ^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/resolution/resolvelib/factory.py&quot;, line 307, in iter_index_candidate_infos #7 2.197 result = self._finder.find_best_candidate( #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/index/package_finder.py&quot;, line 892, in find_best_candidate #7 2.197 candidates = self.find_all_candidates(project_name) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/index/package_finder.py&quot;, line 833, in find_all_candidates #7 2.197 page_candidates = list(page_candidates_it) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/index/sources.py&quot;, line 193, in page_candidates #7 2.197 yield from self._candidates_from_page(self._link) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File 
&quot;/usr/local/lib/python3.12/site-packages/pip/_internal/index/package_finder.py&quot;, line 793, in process_project_url #7 2.197 index_response = self._link_collector.fetch_response(project_url) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/index/collector.py&quot;, line 448, in fetch_response #7 2.197 return _get_index_content(location, session=self.session) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/index/collector.py&quot;, line 352, in _get_index_content #7 2.197 resp = _get_simple_response(url, session=session) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/index/collector.py&quot;, line 131, in _get_simple_response #7 2.197 resp = session.get( #7 2.197 ^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/requests/sessions.py&quot;, line 602, in get #7 2.197 return self.request(&quot;GET&quot;, url, **kwargs) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/network/session.py&quot;, line 523, in request #7 2.197 return super().request(method, url, *args, **kwargs) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/requests/sessions.py&quot;, line 589, in request #7 2.197 resp = self.send(prep, **send_kwargs) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/requests/sessions.py&quot;, line 710, in send #7 2.197 r = dispatch_hook(&quot;response&quot;, hooks, r, **kwargs) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_vendor/requests/hooks.py&quot;, line 30, in dispatch_hook #7 2.197 _hook_data = hook(hook_data, **kwargs) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/network/auth.py&quot;, line 505, in handle_401 #7 2.197 username, password, save = self._prompt_for_password(parsed.netloc) #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/network/auth.py&quot;, line 460, in _prompt_for_password #7 2.197 username = ask_input(f&quot;User for {netloc}: &quot;) if self.prompting else None #7 2.197 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ #7 2.197 File &quot;/usr/local/lib/python3.12/site-packages/pip/_internal/utils/misc.py&quot;, line 243, in ask_input #7 2.197 return input(message) #7 2.197 ^^^^^^^^^^^^^^ #7 2.197 EOFError: EOF when reading a line #7 ERROR: process &quot;/bin/sh -c pip install --extra-index-url https://us-central1-python.pkg.dev/&lt;PROJECTID&gt;/&lt;PYPIREGISTRY&gt;/simple/ my-backend==0.1.3 &amp;&amp; pip install gunicorn&quot; did not complete successfully: exit code: 2 </code></pre> <p>If I run <code>pip install --extra-index-url https://us-central1-python.pkg.dev/&lt;PROJECTID&gt;/&lt;PYPIREGISTRY&gt;/simple/ my-backend==0.1.3</code> on my laptop, it works fine.</p> <p>What do I have to do to make sure Docker in GitHub Actions is able to reach GCP Artifact Registry using OIDC?</p>
<python><docker><google-cloud-platform><github-actions><google-artifact-registry>
2025-08-21 13:41:11
1
21,411
NPatel
79,742,342
7,483,211
Cannot activate conda/mamba environment in Claude Code session
<p>I'm trying out Claude Code and want it to activate and use a pre-existing conda environment.</p> <p>I told it to run <code>micromamba activate py12</code> to activate the <code>py12</code> environment. This doesn't seem to work: afterwards there's still no <code>python</code> available.</p> <p>This is what I've tried:</p> <pre><code>&gt; just run micromamba activate py12 Bash(micromamba activate py12) ⎿  (No content) ⏺ Bash(python generate_clades_tsv.py) ⎿  Error: zsh: command not found: python </code></pre> <p>It seems that the activation failed silently - maybe some environment variables weren't set appropriately.</p> <p>How can one activate a conda environment from within Claude Code?</p>
<python><conda><mamba><micromamba><claude-code>
2025-08-21 13:08:04
2
10,272
Cornelius Roemer
79,742,202
13,045,595
Create a Pip Wheel for OpenCV Built from Source to Prevent It Being Overwritten by Library Dependencies
<p>I have a Dockerfile that builds OpenCV from source with cuda. The build itself succeeds, but pip doesn’t recognize this custom installation. As a result, when I later install a Python package that depends on OpenCV, pip fetches a prebuilt opencv-python wheel, which then overrides (or hides) my source build. According to this suggestion (<a href="https://stackoverflow.com/a/62642547/13045595">https://stackoverflow.com/a/62642547/13045595</a> by @jkr), the proper fix is to package my custom build as a wheel and install it so pip treats the dependency as satisfied. I tried that, and although the wheel build script finishes without errors, the installed wheel doesn’t actually work. Could you review the wheel-building script and/or the Dockerfile and let me know what needs to be adjusted or optimized?</p> <pre><code># syntax=docker/dockerfile:1 # Requires Docker BuildKit for cache mounts (build with DOCKER_BUILDKIT=1) FROM nvidia/cuda:12.1.0-cudnn8-devel-ubuntu22.04 # Environment setup ENV DEBIAN_FRONTEND=noninteractive \ LANG=en_US.UTF-8 \ LC_ALL=en_US.UTF-8 \ PYTHONIOENCODING=UTF-8 \ NVIDIA_DRIVER_CAPABILITIES=all \ CUDA_HOME=/usr/local/cuda \ PATH=/usr/local/cuda/bin:$PATH \ LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH \ CCACHE_DIR=/ccache # Create ccache directory RUN mkdir -p /ccache # Set locale RUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends \ locales \ &amp;&amp; locale-gen en_US.UTF-8 \ &amp;&amp; update-locale LANG=en_US.UTF-8 # Essential system dependencies RUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends \ # Build tools build-essential \ cmake \ pkg-config \ ninja-build \ ccache \ # GCC-10 for CUDA 12.x compatibility gcc-10 \ g++-10 \ # Version control git \ # Utilities wget \ curl \ unzip \ vim \ nano \ htop \ software-properties-common \ &amp;&amp; add-apt-repository ppa:deadsnakes/ppa -y \ &amp;&amp; apt-get update \ &amp;&amp; apt-get install -y --no-install-recommends \ python3.9 \ python3.9-dev \ python3.9-venv \ python3.9-distutils \ python3-pip \ # OpenCV dependencies libgtk-3-dev \ libgtk2.0-dev \ libavcodec-dev \ libavformat-dev \ libswscale-dev \ libswresample-dev \ libtbb-dev \ libjpeg-dev \ libpng-dev \ libtiff-dev \ libwebp-dev \ libopenexr-dev \ libdc1394-dev \ libgstreamer1.0-dev \ libgstreamer-plugins-base1.0-dev \ libv4l-dev \ libxvidcore-dev \ libx264-dev \ libfdk-aac-dev \ libmp3lame-dev \ libtheora-dev \ libvorbis-dev \ libxine2-dev \ libopencore-amrnb-dev \ libopencore-amrwb-dev \ # Math libraries libopenblas-dev \ liblapack-dev \ libatlas-base-dev \ libeigen3-dev \ libhdf5-dev \ # OpenGL support libgl1-mesa-glx \ libglu1-mesa-dev \ libglew-dev \ # Qt5 for OpenCV GUI qtbase5-dev \ qtchooser \ qt5-qmake \ qtbase5-dev-tools \ # Additional utilities libprotobuf-dev \ protobuf-compiler \ &amp;&amp; apt-get clean \ &amp;&amp; rm -rf /var/lib/apt/lists/* # Set Python 3.9 as default RUN update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.9 1 \ &amp;&amp; update-alternatives --install /usr/bin/python python /usr/bin/python3.9 1 \ &amp;&amp; python3.9 -m pip install --upgrade pip setuptools wheel # Install numpy first (required for OpenCV Python bindings) # Also install wheel building tools RUN python3 -m pip install --no-cache-dir \ numpy==1.26.4 \ wheel \ setuptools \ build \ auditwheel \ patchelf # Remove any pre-existing OpenCV packages that might conflict RUN pip3 uninstall -y opencv-python opencv-python-headless opencv-contrib-python opencv-contrib-python-headless || true # Set 
GCC-10 as default for OpenCV build RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 \ &amp;&amp; update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 100 # Configure ccache RUN ccache --set-config=cache_dir=/ccache \ &amp;&amp; ccache --set-config=max_size=15G \ &amp;&amp; ccache --set-config=compression=true \ &amp;&amp; ccache --set-config=compression_level=6 # OpenCV 4.8.0 with CUDA support and NONFREE modules ARG OPENCV_VERSION=4.8.0 ARG CUDA_ARCH_BIN=&quot;7.5;8.0;8.6;8.9&quot; # opencv and opencv-contrib : # including NONFREE code -could be used or not- # Use BuildKit cache mount for ccache to speed up rebuilds RUN --mount=type=cache,target=/ccache \ cd /opt/ &amp;&amp;\ wget https://github.com/opencv/opencv/archive/${OPENCV_VERSION}.zip -O opencv.zip &amp;&amp;\ unzip -qq opencv.zip &amp;&amp;\ rm opencv.zip &amp;&amp;\ wget https://github.com/opencv/opencv_contrib/archive/${OPENCV_VERSION}.zip -O opencv-co.zip &amp;&amp;\ unzip -qq opencv-co.zip &amp;&amp;\ rm opencv-co.zip &amp;&amp;\ mkdir /opt/opencv-${OPENCV_VERSION}/build &amp;&amp; cd /opt/opencv-${OPENCV_VERSION}/build &amp;&amp;\ # Configure OpenCV with proper CUDA linking:\ # - Use stubs for linking but set RPATH to real CUDA libs for runtime\ # - This prevents runtime failures from stub library references\ cmake \ -D BUILD_opencv_java=OFF \ -D WITH_CUDA=ON \ -D BUILD_opencv_dnn=ON \ -D CUDA_ARCH_BIN=&quot;${CUDA_ARCH_BIN}&quot; \ -D WITH_CUBLAS=ON \ -D WITH_CUDNN=ON \ -D OPENCV_DNN_CUDA=ON \ -D ENABLE_FAST_MATH=ON \ -D CUDA_FAST_MATH=ON \ -D WITH_CUFFT=ON \ -D WITH_OPENGL=ON \ -D WITH_QT=ON \ -D WITH_IPP=ON \ -D WITH_TBB=ON \ -D WITH_EIGEN=ON \ -D WITH_OPENEXR=ON \ -D BUILD_TESTS=OFF \ -D BUILD_PERF_TESTS=OFF \ -D BUILD_DOCS=OFF \ -D BUILD_EXAMPLES=OFF \ -D WITH_OPENCL=ON \ -D WITH_OPENMP=ON \ -D WITH_FFMPEG=ON \ -D WITH_V4L=ON \ -D WITH_GSTREAMER=ON \ -D CMAKE_BUILD_TYPE=RELEASE \ -D CMAKE_C_COMPILER_LAUNCHER=ccache \ -D CMAKE_CXX_COMPILER_LAUNCHER=ccache \ -D CMAKE_CUDA_COMPILER_LAUNCHER=ccache \ -D OPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-${OPENCV_VERSION}/modules \ -D CMAKE_INSTALL_PREFIX=/usr/local \ -D PYTHON3_EXECUTABLE=$(which python3) \ -D PYTHON3_INCLUDE_DIR=$(python3 -c &quot;import sysconfig; print(sysconfig.get_paths()['include'])&quot;) \ -D PYTHON3_LIBRARY=$(python3 -c &quot;import sysconfig; cfg=sysconfig.get_config_vars(); print(cfg['LIBDIR'] + '/' + cfg['LDLIBRARY'])&quot;) \ -D PYTHON3_PACKAGES_PATH=$(python3 -c &quot;import sysconfig; print(sysconfig.get_paths()['platlib'])&quot;) \ -D PYTHON3_NUMPY_INCLUDE_DIRS=$(python3 -c &quot;import numpy; print(numpy.get_include())&quot;) \ -D BUILD_opencv_python3=ON \ -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda \ -D CMAKE_LIBRARY_PATH=/usr/local/cuda/lib64/stubs \ -D CMAKE_INSTALL_RPATH=/usr/local/cuda/lib64 \ -D CMAKE_BUILD_RPATH=/usr/local/cuda/lib64 \ -D CMAKE_INSTALL_RPATH_USE_LINK_PATH=ON \ -D OPENCV_ENABLE_NONFREE=ON \ .. 
&amp;&amp;\ make -j$(nproc) &amp;&amp; \ make install &amp;&amp; \ ldconfig # Copy constraints file to prevent PyPI OpenCV packages from overwriting our build COPY constraints.txt /tmp/constraints.txt # Build and install OpenCV as a proper Python wheel COPY build_opencv_wheel.py /tmp/ RUN cd /tmp &amp;&amp; \ python3 build_opencv_wheel.py ${OPENCV_VERSION} --output-dir /tmp/opencv_wheel --install &amp;&amp; \ # Save the wheel for potential reuse cp /tmp/opencv_wheel/dist/*.whl /tmp/ 2&gt;/dev/null || true &amp;&amp; \ # Clean up build directories rm -rf /opt/opencv-${OPENCV_VERSION} &amp;&amp; \ rm -rf /opt/opencv_contrib-${OPENCV_VERSION} &amp;&amp; \ rm -rf /tmp/opencv_wheel &amp;&amp; \ rm /tmp/build_opencv_wheel.py # Quick verification that OpenCV is installed RUN python3 -c &quot;import cv2; print(f'OpenCV installed: {cv2.__version__}')&quot; &amp;&amp; \ pip3 list | grep opencv </code></pre> <p>build_opencv_wheel.py</p> <pre><code>#!/usr/bin/env python3 &quot;&quot;&quot; Build a proper Python wheel for OpenCV after compilation. This script creates a wheel package from the compiled OpenCV libraries. &quot;&quot;&quot; import os import sys import shutil import subprocess from pathlib import Path from setuptools import setup, find_packages import sysconfig def find_cv2_module(): &quot;&quot;&quot;Find the compiled cv2 module.&quot;&quot;&quot; # Check both platlib and purelib for key in ('platlib', 'purelib'): site_packages = Path(sysconfig.get_paths()[key]) cv2_path = site_packages / 'cv2' if cv2_path.exists() and cv2_path.is_dir(): return cv2_path # Common locations where cv2.so might be after make install possible_paths = [ Path('/usr/local/lib/python3.9/dist-packages/cv2'), Path('/usr/local/lib/python3.9/site-packages/cv2'), Path('/usr/lib/python3/dist-packages/cv2'), Path('/usr/lib/python3.9/dist-packages/cv2'), Path('/usr/lib/python3.9/site-packages/cv2'), ] for path in possible_paths: if path.exists() and path.is_dir(): return path # Try to find cv2.*.so files for path in [Path('/usr/local/lib'), Path('/usr/lib'), Path('/usr/local')]: cv2_files = list(path.glob('**/cv2*.so')) if cv2_files: return cv2_files[0].parent raise FileNotFoundError(&quot;Could not find compiled cv2 module&quot;) def create_wheel(opencv_version='4.8.0', output_dir='/tmp/opencv_wheel'): &quot;&quot;&quot;Create a wheel from the compiled OpenCV.&quot;&quot;&quot; # Find the cv2 module cv2_path = find_cv2_module() print(f&quot;Found cv2 module at: {cv2_path}&quot;) # Create temporary build directory build_dir = Path(output_dir) build_dir.mkdir(parents=True, exist_ok=True) # Copy cv2 module directly to build directory (top-level package) cv2_dest = build_dir / 'cv2' if cv2_dest.exists(): shutil.rmtree(cv2_dest) shutil.copytree(cv2_path, cv2_dest) # Ensure cv2 has proper version info version_file = cv2_dest / '__version__.py' version_file.write_text(f'__version__ = &quot;{opencv_version}+cuda12.1&quot;\n') # Create setup.py setup_py = build_dir / 'setup.py' setup_content = f''' from setuptools import setup, find_packages setup( name='opencv-contrib-python', version='{opencv_version}+cuda12.1', description='OpenCV Python bindings with contrib modules and CUDA support (custom build)', long_description='Custom build of OpenCV {opencv_version} with CUDA support and contrib modules', author='OpenCV Team', author_email='', url='https://opencv.org', license='Apache 2.0', packages=['cv2', 'cv2.data', 'cv2.misc', 'cv2.mat_wrapper', 'cv2.utils'], package_data={{ 'cv2': [ '*.so', '*.pyd', 'config*.py', '__version__.py', 
'data/*.xml', 'data/*.dat', 'misc/**/*.json', 'mat_wrapper/*.json', 'utils/**/*.py' ] }}, include_package_data=True, install_requires=['numpy&gt;=1.19.3'], python_requires='&gt;=3.6', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'Intended Audience :: Science/Research', 'License :: OSI Approved :: Apache Software License', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.9', 'Programming Language :: C++', 'Topic :: Scientific/Engineering', 'Topic :: Scientific/Engineering :: Image Recognition', 'Topic :: Software Development :: Libraries :: Python Modules', ], zip_safe=False, ) ''' setup_py.write_text(setup_content) # Create MANIFEST.in to include all necessary files manifest = build_dir / 'MANIFEST.in' manifest.write_text(''' recursive-include cv2 *.so *.pyd *.py recursive-include cv2/data * recursive-include cv2/misc * recursive-include cv2/mat_wrapper * recursive-include cv2/utils * ''') # Build the wheel using modern build module os.chdir(build_dir) result = subprocess.run( [sys.executable, '-m', 'build', '--wheel', '--outdir', 'dist'], capture_output=True, text=True ) if result.returncode != 0: print(f&quot;Error building wheel: {result.stderr}&quot;) # Fallback to legacy method if build module fails print(&quot;Falling back to legacy setup.py bdist_wheel...&quot;) result = subprocess.run( [sys.executable, 'setup.py', 'bdist_wheel'], capture_output=True, text=True ) if result.returncode != 0: print(f&quot;Error with fallback build: {result.stderr}&quot;) return None # Find the built wheel dist_dir = build_dir / 'dist' wheels = list(dist_dir.glob('*.whl')) if not wheels: print(&quot;No wheel file found after build&quot;) return None wheel_file = wheels[0] print(f&quot;Successfully built wheel: {wheel_file}&quot;) # Skip auditwheel for internal use (often fails with CUDA libs) # Uncomment if you need manylinux compatibility for distribution # try: # repaired_dir = build_dir / 'wheelhouse' # repaired_dir.mkdir(exist_ok=True) # # result = subprocess.run( # ['auditwheel', 'repair', str(wheel_file), # '--plat', 'manylinux_2_17_x86_64', # '-w', str(repaired_dir)], # capture_output=True, # text=True # ) # # if result.returncode == 0: # repaired_wheels = list(repaired_dir.glob('*.whl')) # if repaired_wheels: # print(f&quot;Repaired wheel: {repaired_wheels[0]}&quot;) # return repaired_wheels[0] # else: # print(f&quot;Auditwheel repair failed: {result.stderr}&quot;) # print(&quot;Using original wheel&quot;) # except Exception as e: # print(f&quot;Auditwheel not available or failed: {e}&quot;) # print(&quot;Using original wheel&quot;) return wheel_file def main(): import argparse parser = argparse.ArgumentParser( description='Build a Python wheel for compiled OpenCV' ) parser.add_argument( 'version', nargs='?', default='4.8.0', help='OpenCV version without +cuda suffix (default: 4.8.0)' ) parser.add_argument( '--output-dir', default='/tmp/opencv_wheel', help='Output directory for wheel build (default: /tmp/opencv_wheel)' ) parser.add_argument( '--install', action='store_true', help='Install the wheel after building' ) args = parser.parse_args() # Build the wheel wheel_file = create_wheel(args.version, args.output_dir) if wheel_file and args.install: print(f&quot;Installing wheel: {wheel_file}&quot;) result = subprocess.run( [sys.executable, '-m', 'pip', 'install', str(wheel_file), '--force-reinstall'], capture_output=True, text=True ) if result.returncode == 0: print(&quot;Wheel installed successfully&quot;) # Verify 
installation subprocess.run([sys.executable, '-c', 'import cv2; print(f&quot;OpenCV {cv2.__version__} installed&quot;)']) else: print(f&quot;Installation failed: {result.stderr}&quot;) return 1 return 0 if __name__ == '__main__': sys.exit(main()) </code></pre> <p>constraints.txt</p> <pre><code># Block all PyPI OpenCV packages - we use our custom build opencv-python==99.99.99 # Impossible version to prevent installation opencv-python-headless==99.99.99 # Impossible version to prevent installation opencv-contrib-python==99.99.99 # Impossible version to prevent installation opencv-contrib-python-headless==99.99.99 # Impossible version to prevent installation </code></pre>
<python><docker><opencv><pip><python-wheel>
2025-08-21 11:14:00
0
335
M.Akyuzlu
79,742,136
785,523
How to read table names from a MySQL file containing the PARTITION keyword using sqlglot?
<p>I am trying to read a <code>MySql</code> <code>SQL</code> file containing <code>PARTITION</code> keyword. I am getting the below error</p> <pre><code>An error occurred during parsing: Expecting ). Line 19, Col: 26. created_at`) USING BTREE ) PARTITION BY RANGE ( UNIX_TIMESTAMP(audit_ts)) ( PARTITION p2401 VALUES LESS THAN (UNIX_TIMESTAMP('2024-02-01 00:00:00')), PARTITION p2402 VALUES LESS THAN (UNIX_TIMES </code></pre> <p>My python code is like below</p> <pre><code>import logging import sqlglot from sqlglot import exp # Configure logger logging.basicConfig( level=logging.INFO, format=&quot;%(asctime)s [%(levelname)s] %(message)s&quot; ) logger = logging.getLogger(__name__) def extract_table_names(sql_file_path, dialect=&quot;mysql&quot;): &quot;&quot;&quot; Parse the SQL file and return a set of unique table names found. Logs errors if file not found or parsing fails. &quot;&quot;&quot; try: with open(sql_file_path, &quot;r&quot;) as f: sql_script = f.read() expression_trees = sqlglot.parse(sql_script, dialect=dialect) table_names = set() for tree in expression_trees: table_names.update([table.name for table in tree.find_all(exp.Table)]) return table_names except FileNotFoundError: logger.error(f&quot;File not found: {sql_file_path}&quot;) return set() except Exception as e: logger.error(f&quot;Error parsing `{sql_file_path}`: {e}&quot;) return set() if __name__ == &quot;__main__&quot;: sql_file = &quot;changeLogs/health-service/create_db.sql&quot; tables = extract_table_names(sql_file) logger.info(f&quot;Total unique tables found: {len(tables)}&quot;) logger.info(f&quot;Table names: {sorted(list(tables))}&quot;) </code></pre> <p>My SQL file is like below</p> <pre><code>-- liquibase formatted sql -- changeset debraj.manna@nexla.com:NEX-18235 CREATE TABLE IF NOT EXISTS `audit_control` ( `id` BIGINT auto_increment NOT NULL, `message_id` VARCHAR(100) DEFAULT NULL, `resource_type` VARCHAR(30) NOT NULL, `event_type` VARCHAR(30) NOT NULL, `resource_id` INT NOT NULL, `origin` VARCHAR(100) NOT NULL, `created_at` TIMESTAMP NOT NULL, `body` mediumtext NOT NULL, `audit_ts` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (id, audit_ts), KEY `audit_control_resource_type_resource_id_IDX` (`resource_type`,`resource_id`) USING BTREE, KEY `audit_control_created_at_IDX` (`created_at`) USING BTREE ) PARTITION BY RANGE ( UNIX_TIMESTAMP(audit_ts)) ( PARTITION p2401 VALUES LESS THAN (UNIX_TIMESTAMP('2024-02-01 00:00:00')), PARTITION p2402 VALUES LESS THAN (UNIX_TIMESTAMP('2024-03-01 00:00:00')), PARTITION p2403 VALUES LESS THAN (UNIX_TIMESTAMP('2024-04-01 00:00:00')), PARTITION p2404 VALUES LESS THAN (UNIX_TIMESTAMP('2024-05-01 00:00:00')), PARTITION p2405 VALUES LESS THAN (UNIX_TIMESTAMP('2024-06-01 00:00:00')), PARTITION p2406 VALUES LESS THAN (UNIX_TIMESTAMP('2024-07-01 00:00:00')), PARTITION p2407 VALUES LESS THAN (UNIX_TIMESTAMP('2024-08-01 00:00:00')), PARTITION p2408 VALUES LESS THAN (UNIX_TIMESTAMP('2024-09-01 00:00:00')), PARTITION p2409 VALUES LESS THAN (UNIX_TIMESTAMP('2024-10-01 00:00:00')), PARTITION p2410 VALUES LESS THAN (UNIX_TIMESTAMP('2024-11-01 00:00:00')), PARTITION p2411 VALUES LESS THAN (UNIX_TIMESTAMP('2024-12-01 00:00:00')), PARTITION p2412 VALUES LESS THAN (UNIX_TIMESTAMP('2025-01-01 00:00:00')), PARTITION pN VALUES LESS THAN MAXVALUE ); CREATE TABLE IF NOT EXISTS `audit_coordination` ( `id` BIGINT auto_increment NOT NULL, `message_id` VARCHAR(100) DEFAULT NULL, `event_type` VARCHAR(30) NOT NULL, `created_at` TIMESTAMP NOT NULL, `body` TEXT NOT NULL, `audit_ts` TIMESTAMP 
NOT NULL DEFAULT CURRENT_TIMESTAMP, PRIMARY KEY (id, audit_ts) ) PARTITION BY RANGE ( UNIX_TIMESTAMP(audit_ts)) ( PARTITION p2401 VALUES LESS THAN (UNIX_TIMESTAMP('2024-02-01 00:00:00')), PARTITION p2402 VALUES LESS THAN (UNIX_TIMESTAMP('2024-03-01 00:00:00')), PARTITION p2403 VALUES LESS THAN (UNIX_TIMESTAMP('2024-04-01 00:00:00')), PARTITION p2404 VALUES LESS THAN (UNIX_TIMESTAMP('2024-05-01 00:00:00')), PARTITION p2405 VALUES LESS THAN (UNIX_TIMESTAMP('2024-06-01 00:00:00')), PARTITION p2406 VALUES LESS THAN (UNIX_TIMESTAMP('2024-07-01 00:00:00')), PARTITION p2407 VALUES LESS THAN (UNIX_TIMESTAMP('2024-08-01 00:00:00')), PARTITION p2408 VALUES LESS THAN (UNIX_TIMESTAMP('2024-09-01 00:00:00')), PARTITION p2409 VALUES LESS THAN (UNIX_TIMESTAMP('2024-10-01 00:00:00')), PARTITION p2410 VALUES LESS THAN (UNIX_TIMESTAMP('2024-11-01 00:00:00')), PARTITION p2411 VALUES LESS THAN (UNIX_TIMESTAMP('2024-12-01 00:00:00')), PARTITION p2412 VALUES LESS THAN (UNIX_TIMESTAMP('2025-01-01 00:00:00')), PARTITION pN VALUES LESS THAN MAXVALUE ); </code></pre> <p>Can someone let me know if this is expected and <a href="https://github.com/tobymao/sqlglot" rel="nofollow noreferrer">sqlglot</a> does not support <code>PARTITION</code>?</p> <ul> <li>sqlglot = 27.8.0</li> <li>python = 3.9.6</li> </ul>
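<p>In case it is relevant, the workaround I am currently experimenting with is to parse tolerantly, so that table names can still be extracted even when a statement trips the parser (a sketch, based on my reading of sqlglot's <code>ErrorLevel</code>):</p> <pre><code>import sqlglot
from sqlglot import exp
from sqlglot.errors import ErrorLevel

# error_level=ErrorLevel.IGNORE keeps parsing instead of raising, so the
# statements that do parse still yield their table names.
trees = sqlglot.parse(sql_script, dialect='mysql', error_level=ErrorLevel.IGNORE)
tables = {t.name for tree in trees if tree for t in tree.find_all(exp.Table)}
</code></pre>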
<python><sqlglot>
2025-08-21 10:04:15
1
6,954
tuk
79,742,127
1,432,694
What to do when the pandas error position overflows?
<p>So, I'm experimenting with pandas on the <a href="https://datasets.imdbws.com/" rel="nofollow noreferrer">IMDB files</a>, especially <code>title.basics.tsv</code>. When trying to parse the <code>runtimeMinutes</code> column to <code>&quot;Int64&quot;</code>, I get an error:</p> <pre><code>ValueError: Unable to parse string &quot;Reality-TV&quot; at position 47993 </code></pre> <p>However, neither line 47994, nor the directly surrounding lines, contain the string <code>Reality-TV</code>. So I started deleting entries from the beginning of the data file, and indeed, the reported position went down. That held until I had deleted exactly 47994 entries, at which point the error became:</p> <pre><code>ValueError: Unable to parse string &quot;Reality-TV&quot; at position 65535 </code></pre> <p>This raised my suspicion that the position variable is a <code>uint16</code> which overflows. Is there a way to deal with this kind of problem and find the actual line that is causing trouble?</p> <hr /> <p>Here is the command I used:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd titles = pd.read_csv(&quot;title.basics.tsv&quot;, sep=&quot;\t&quot;, dtype={ &quot;runtimeMinutes&quot;: &quot;Int64&quot;, }, na_values={ &quot;runtimeMinutes&quot;: [&quot;\\N&quot;], }) </code></pre>
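<p>My current workaround for locating the offending rows is to read the column as plain strings first and check which values fail to convert (a sketch; it assumes <code>\N</code> is the only intended NA marker):</p> <pre><code>import pandas as pd

raw = pd.read_csv('title.basics.tsv', sep='\t', dtype=str)
mins = raw['runtimeMinutes'].replace('\\N', pd.NA)
# to_numeric with errors='coerce' turns unparseable values into NaN, so
# comparing before and after exposes the rows with bad data.
bad = raw[pd.to_numeric(mins, errors='coerce').isna() &amp; mins.notna()]
print(bad.index.tolist()[:10])
</code></pre>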
<python><pandas><integer-overflow>
2025-08-21 09:59:25
2
685
red_trumpet
79,742,073
416,983
Export a PyTorch custom hash table lookup OP to ONNX
<p>I have implemented a PyTorch op which accepts a torch.int64 tensor and outputs another torch.int64 tensor by looking up a hash table with predefined key-value pairs.</p> <p>The torch part is implemented like this, where <code>table</code> is a wrapper around a C++ hash table:</p> <pre><code>class TableLookup(torch.autograd.Function): @staticmethod def forward(ctx, table, x): y = table.lookup(x) return y @staticmethod def symbolic(g, table, x): raise NotImplementedError </code></pre> <p>The forward pass works nicely; a backward pass isn't needed since <code>x</code> and <code>y</code> are integral tensors.</p> <p>My question is how to implement <code>symbolic</code> so that torch can correctly export this op to the ONNX format.</p> <p>I just checked the ONNX specification; version 2 of the &quot;ai.onnx.ml.LabelEncoder&quot; op could probably be used as a table-lookup op.</p>
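<p>For concreteness, this is the kind of <code>symbolic</code> I have in mind. It's only a sketch: <code>keys()</code>/<code>values()</code> are hypothetical accessors on my table wrapper, and the <code>_i</code> suffixes follow torch's attribute-naming convention for integer lists as I understand it:</p> <pre><code>import torch

class TableLookup(torch.autograd.Function):
    @staticmethod
    def forward(ctx, table, x):
        return table.lookup(x)

    @staticmethod
    def symbolic(g, table, x):
        # LabelEncoder (ai.onnx.ml, version 2) maps int64 keys to int64 values.
        return g.op(
            'ai.onnx.ml::LabelEncoder',
            x,
            keys_int64s_i=table.keys(),      # hypothetical accessor
            values_int64s_i=table.values(),  # hypothetical accessor
            default_int64_i=0,
        )
</code></pre>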
<python><pytorch><onnx>
2025-08-21 09:06:49
0
1,106
user416983
79,741,635
6,162,679
How to automatically insert parentheses () when autocompleting functions in Python using Positron IDE?
<p>I am new to the Positron IDE and I'd like it to automatically insert parentheses () when coding in Python. For example, when I type <code>len</code> and hit Enter to confirm, it does not automatically insert parentheses ().</p> <p>After searching online, it seems that in VS Code, there is a setting:</p> <p><code>&quot;python.analysis.completeFunctionParens&quot;: true</code></p> <p>However, this is not supported in the Positron IDE. How can I achieve this feature there?</p>
<python><autocomplete><positron>
2025-08-20 22:04:00
2
922
Yang Yang
79,741,568
2,648,504
Pandas - return the -2 row
<p>If I have an input.txt file:</p> <pre><code>apples grapes alpha pears chicago paris london yellow blue red +++++++++++++++++++++ apples grapes beta pears chicago paris london car truck van +++++++++++++++++++ apples grapes gamma pears chicago paris london white purple black +++++++++++++++++++ apples grapes delta pears chicago paris london car truck van </code></pre> <p>I want to find all rows containing <code>truck</code> as the 2nd string, then return the 3rd string from the row two lines above.</p> <p>Output would be:</p> <pre><code>beta delta </code></pre> <p>So far, I have this code that finds the rows I'd like, then creates a dataframe from the list. What is the best way to continue using pandas and get the -2 row/value that I need?</p> <pre><code>import pandas as pd data_list = [] with open('input.txt', 'r') as data: for line in data: split_row = line.split() if len(split_row) &gt; 1 and split_row[1] == &quot;truck&quot;: data_list.append(split_row) df = pd.DataFrame(data_list) print(df.to_string()) </code></pre>
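<p>For reference, this is the kind of whole-file approach I was picturing, reading every row and shifting (a sketch; it assumes at most four whitespace-separated tokens per row, as in the sample above):</p> <pre><code>import pandas as pd

df = pd.read_csv('input.txt', sep=r'\s+', header=None, names=list('abcd'))
# shift(-2) aligns each row with the row two lines below it, so the mask is
# True exactly on the rows whose 'b' value two lines down is 'truck'.
mask = df['b'].shift(-2).eq('truck')
print(df.loc[mask, 'c'].tolist())  # ['beta', 'delta']
</code></pre>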
<python><pandas>
2025-08-20 20:40:26
3
881
yodish
79,741,525
145,682
pycryptodome decryption (AES-128 CBC) yields an incorrect result
<p>I have simple code to encrypt and decrypt as follows...</p> <p>(1) To encrypt:</p> <pre class="lang-py prettyprint-override"><code>from Crypto.Cipher import AES from Crypto.Util.Padding import pad, unpad from Crypto.Hash import SHA256 as sha256 def sha(text): _sha = sha256.new(text) return _sha.hexdigest() key = '1234' print('Key length:', len(key)) block_size = 16 plain_text = &quot;The quick brown fox jumped over the lazy dog&quot; print('Plain text length:', len(plain_text)) akey = pad(key.encode(), block_size) print('akey length', len(akey)) pt_encoded = plain_text.encode() cipher = AES.new(akey, AES.MODE_CBC) payload = pad(pt_encoded, block_size) print('sha payload:', sha(payload)) encrypted = cipher.encrypt(payload) # print(encrypted) print('Encrypted sha:', sha(encrypted)) with open('data.bin', 'wb') as f: f.write(encrypted) </code></pre> <p>(2) To Decrypt</p> <pre class="lang-py prettyprint-override"><code>cipher2 = AES.new(akey, AES.MODE_CBC) with open('data.bin', 'rb') as f: data = f.read() print('file contents sha:', sha(data)) decrypted = cipher2.decrypt(data) print('decrypted sha:', sha(decrypted)) plain = unpad(decrypted, block_size) print('Plain text:', plain.decode()) </code></pre> <p>Entire code and error in the gist: <a href="https://gist.github.com/deostroll/be4c6768b1e73b72fb0313e90345a0dd" rel="nofollow noreferrer">https://gist.github.com/deostroll/be4c6768b1e73b72fb0313e90345a0dd</a></p> <p>The observation is that after obtaining the decrypted contents and trying to decode, I get the following error:</p> <pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid start byte </code></pre> <p>Am I correctly trying to do the encryption/decryption?</p>
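<p>While investigating, the pattern I found in the pycryptodome documentation persists the IV next to the ciphertext; here is a sketch of that pattern as I understood it (I may be misreading it, hence the question; the file name is reused from above):</p> <pre><code>from Crypto.Cipher import AES
from Crypto.Util.Padding import pad, unpad

block_size = 16
akey = pad('1234'.encode(), block_size)

# Encrypt: persist the (randomly generated) IV alongside the ciphertext.
enc = AES.new(akey, AES.MODE_CBC)
ct = enc.encrypt(pad(b'The quick brown fox', block_size))
with open('data.bin', 'wb') as f:
    f.write(enc.iv + ct)

# Decrypt: read the IV back and pass it to the new cipher object.
with open('data.bin', 'rb') as f:
    blob = f.read()
dec = AES.new(akey, AES.MODE_CBC, iv=blob[:16])
print(unpad(dec.decrypt(blob[16:]), block_size))
</code></pre>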
<python><cryptography><aes><pycryptodome>
2025-08-20 19:39:09
2
11,985
deostroll
79,741,492
1,324,833
Resetting data limits on zoom in a Matplotlib Python program (long)
<p>I've created a program to display very high sample rate data in profile. There are 6 channels (mag[0..2] &amp; diff[0..2]) sampled at 30 kHz. I display them in 3 profiles, mag[0] with diff[0], etc. All plots share the same x axis (nptime), which is numpy timedate64, with different scales for the mag and diff. Axes 0 to 2 represent the mag profiles and 3 to 5 are diff.</p> <p>In order to speed up the display I slice the data to the window limits and a desampling rate of 1, 10 or 100, based on the x extents. I do this in a callback on the <code>xlim_changed</code> event for axis 0.</p> <p>It <em>almost</em> works. I call <code>set_ylim</code> within the callback using the data range for the windowed data. For the mag channels the y limits are set correctly for 2 of the 3 panels, but the panel I zoomed on (whichever that is) ends up displaying the zoom extents not the new extents. For the diff channels I've set up axes 3 to 5 to sharey, so I only need to set axis 3, but the set_ylim is ignored. It ALWAYS displays with the y extent from the zoom. I actually don't want the ylim for diff to change at all, I always set it to the same value.</p> <pre><code>def reset_data(self, ax): # the viewLim extents are decimal days, so some coordinate conversion is required width = ax.viewLim.width*86400 start=np.datetime64(pd.to_datetime(self.nptime[0]).date()) +np.timedelta64(int(ax.viewLim.min[0]%1*8.64e10),'us') idx0 = self.find_nearest(self.nptime, start) end=np.datetime64(pd.to_datetime(self.nptime[0]).date()) +np.timedelta64(int(ax.viewLim.max[0]%1*8.64e10),'us') idx1 = self.find_nearest(self.nptime[idx0:], end) + idx0 # the get_rate function returns 1 of 3 desample factors 1, 10 or 100 rate = self.get_rate(width) xdata = self.nptime[idx0:idx1:rate] for i in range(3): ydata = self.mag[i, idx0:idx1:rate] ymin = ydata.min() ymax = ydata.max() diff = (ymax - ymin)*.1 # self.lines is an array of the plots returned by self.ax[].plot() self.lines[i][0].set_data(xdata,ydata) self.ax[i].set_ylim(ymin-diff, ymax+diff) for i in range(3): ydata = self.diff[i, idx0:idx1:rate] self.lines[i+3][0].set_data(xdata,ydata) self.ax[3].set_ylim(self.min_diff, self.max_diff) </code></pre> <p>before zoom: <a href="https://i.sstatic.net/2TWUNQM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2TWUNQM6.png" alt="enter image description here" /></a></p> <p>after zoom: <a href="https://i.sstatic.net/8BOsaBTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8BOsaBTK.png" alt="enter image description here" /></a></p> <p>Note that the panel limits for the mag (blue) channels have been set to the data limit (+10%) for panels 2&amp;3, but the extents of the zoom box on panel 1. Also the pink trace is 7.5 to 27 on all 3 panels, which represents the zoom extents, not the limits I set.</p> <p>Sorry for the long post. I can't explain it any more briefly.</p>
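<p>One idea I am experimenting with is deferring the limit update until after the toolbar has finished applying the zoom, using a single-shot zero-interval timer (a sketch only: <code>_relim_timer</code> is a name I made up, and I have not verified that this ordering is guaranteed):</p> <pre><code>def on_xlim_changed(self, ax):
    # Let the toolbar finish its zoom first; re-apply our own limits
    # afterwards via a one-shot timer, then request a redraw.
    self._relim_timer = ax.figure.canvas.new_timer(interval=0)
    self._relim_timer.single_shot = True
    self._relim_timer.add_callback(lambda: (self.reset_data(ax),
                                            ax.figure.canvas.draw_idle()))
    self._relim_timer.start()
</code></pre>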
<python><matplotlib>
2025-08-20 19:06:35
1
1,237
marcp
79,741,348
5,118,421
Mypy: Source file found twice under different module names
<p><a href="https://results.pre-commit.ci/run/github/37489525/1754726473.T4bKKoTUTfG-t4riT2_Kjg" rel="nofollow noreferrer">https://results.pre-commit.ci/run/github/37489525/1754726473.T4bKKoTUTfG-t4riT2_Kjg</a></p> <pre><code>Source file found twice under different module names: &quot;example_scripts.rewrite.src.main&quot; and &quot;testing.example_scripts.rewrite.src.main&quot; </code></pre> <p>I've already added <strong>init</strong> file to the package.</p>
<python><mypy>
2025-08-20 16:28:28
1
1,407
Irina
79,741,334
10,997,667
Populate folium TimestampedGeoJson features using lambda functions?
<p>I am following some code examples to plot time aware coordinates on a folium map using the <code>folium.plugins.TimestampedGeoJson</code> method. As in the example, I'm using a for-loop to tie coordinates and timestamps in a list of <code>GeoJson</code> features. In other work I have formatted styling of <code>GeoJson</code> feature collections using lambda functions like so</p> <pre><code>collection = folium.GeoJson( geoPandasDataframeObj, marker=folium.Circle(radius=8, fill_color=None, fill_opacity=1, color=None, weight=1), popup=folium.GeoJsonPopup(fields=[&quot;Field 1&quot;, &quot;Field 2&quot;]), style_function=lambda x: { &quot;fillColor&quot;: colormap(x['properties']['Field 1']), &quot;color&quot;: colormap(x['properties']['Field 2']), }, ) </code></pre> <p><strong>Is it possible to use a similar approach with <code>TimestampedGeoJson</code> to apply icon/marker styling using lambdas?</strong> The example code I'm using is here (<a href="https://python-visualization.github.io/folium/latest/user_guide/plugins/timestamped_geojson.html#" rel="nofollow noreferrer">Example Code</a>):</p> <pre><code>import folium from folium.plugins import TimestampedGeoJson m = folium.Map( location=[56.096555, -3.64746], tiles=&quot;cartodbpositron&quot;, zoom_start=5, ) table = &quot;&quot;&quot;\ &lt;table style=\'width:100%\'&gt; &lt;tr&gt; &lt;th&gt;Firstname&lt;/th&gt; &lt;th&gt;Lastname&lt;/th&gt; &lt;th&gt;Age&lt;/th&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Jill&lt;/td&gt; &lt;td&gt;Smith&lt;/td&gt; &lt;td&gt;50&lt;/td&gt; &lt;/tr&gt; &lt;tr&gt; &lt;td&gt;Eve&lt;/td&gt; &lt;td&gt;Jackson&lt;/td&gt; &lt;td&gt;94&lt;/td&gt; &lt;/tr&gt; &lt;/table&gt; &quot;&quot;&quot; points = [ { &quot;time&quot;: &quot;2017-06-02&quot;, &quot;popup&quot;: &quot;&lt;h1&gt;address1&lt;/h1&gt;&quot;, &quot;coordinates&quot;: [-2.548828, 51.467697], }, { &quot;time&quot;: &quot;2017-07-02&quot;, &quot;popup&quot;: &quot;&lt;h2 style='color:blue;'&gt;address2&lt;h2&gt;&quot;, &quot;coordinates&quot;: [-0.087891, 51.536086], }, { &quot;time&quot;: &quot;2017-08-02&quot;, &quot;popup&quot;: &quot;&lt;h2 style='color:orange;'&gt;address3&lt;h2&gt;&quot;, &quot;coordinates&quot;: [-6.240234, 53.383328], }, { &quot;time&quot;: &quot;2017-09-02&quot;, &quot;popup&quot;: &quot;&lt;h2 style='color:green;'&gt;address4&lt;h2&gt;&quot;, &quot;coordinates&quot;: [-1.40625, 60.261617], }, { &quot;time&quot;: &quot;2017-10-02&quot;, &quot;popup&quot;: table, &quot;coordinates&quot;: [-1.516113, 53.800651] }, ] times = [&quot;2017-06-02&quot;, &quot;2017-07-02&quot;, &quot;2017-08-02&quot;, &quot;2017-09-02&quot;, &quot;2017-10-02&quot;] coords = [(-2.548828, 51.467697), (-0.087891, 51.536086), (-6.240234, 53.383328), (-1.40625, 60.261617), (-1.516113, 53.800651)] colors = ['#e41a1c','#377eb8','#4daf4a','#984ea3','#ff7f00'] features = [] for point in range(0,len(times)): features.append({ &quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: { &quot;type&quot;: &quot;Point&quot;, &quot;coordinates&quot;: coords[point], }, &quot;properties&quot;: { &quot;time&quot;: times[point], &quot;icon&quot;: &quot;circle&quot;, &quot;iconstyle&quot;: {&quot;color&quot;: colors[point], &quot;fill&quot;: &quot;true&quot;, &quot;fillOpacity&quot;: 1.0, &quot;radius&quot;: 5}, }, }) folium.plugins.TimestampedGeoJson( { &quot;type&quot;: &quot;FeatureCollection&quot;, &quot;features&quot;: features, }, period=&quot;P7D&quot;, add_last_point=True, ).add_to(m) m.save('TimestampedGeoJsonPoint.html') </code></pre>
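<p>As far as I can tell, <code>TimestampedGeoJson</code> exposes no <code>style_function</code> hook, so one workaround sketch is to apply an ordinary callable while the features are built — the styling rule below (colour picked from the month) is purely illustrative:</p> <pre class="lang-py prettyprint-override"><code>def iconstyle_for(point):
    # hypothetical stand-in for a style_function: pick a colour from the month
    palette = ['#e41a1c', '#377eb8', '#4daf4a', '#984ea3', '#ff7f00']
    month = int(point['time'].split('-')[1])
    return {'color': palette[month % len(palette)], 'fill': 'true',
            'fillOpacity': 1.0, 'radius': 5}

features = [
    {
        'type': 'Feature',
        'geometry': {'type': 'Point', 'coordinates': point['coordinates']},
        'properties': {'time': point['time'], 'popup': point['popup'],
                       'icon': 'circle', 'iconstyle': iconstyle_for(point)},
    }
    for point in points  # the `points` list defined earlier in the example
]
</code></pre>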
<python><leaflet><geojson><folium><folium-plugins>
2025-08-20 16:11:58
0
787
osprey
79,741,271
1,390,012
Google Chat App on Cloud Functions (2nd gen) – response fails
<p>I’m building a very simple Google Chat App on Cloud Functions (2nd gen) (Python). The app should just reply &quot;OK&quot; when I send a message from <code>mail.google.com/chat</code>.</p> <p>But the error log always shows this message:</p> <blockquote> <p>Can't post a reply. The Chat app didn't respond or its response was invalid. If your Chat app is configured as an add-on, see &quot;Build Google Chat interfaces&quot; (<a href="https://developers.google.com/workspace/add-ons/chat/build" rel="nofollow noreferrer">https://developers.google.com/workspace/add-ons/chat/build</a>) in the Google Workspace add-ons documentation. Otherwise, see &quot;Receive and respond to Google Chat events&quot; (<a href="https://developers.google.com/chat/api/guides/message-formats" rel="nofollow noreferrer">https://developers.google.com/chat/api/guides/message-formats</a>) in the Chat API documentation.</p> </blockquote> <p>Python app:</p> <pre><code>import functions_framework

@functions_framework.http
def mi_app_de_chat(request):
    event = request.get_json(silent=True)
    if event and event.get(&quot;type&quot;) == &quot;ADDED_TO_SPACE&quot;:
        return {&quot;text&quot;: &quot;Your Welcome&quot;}
    return {&quot;text&quot;: &quot;OK&quot;}
</code></pre>
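<p>One variant sometimes tried in this situation — making the JSON response explicit via Flask (which <code>functions-framework</code> bundles), on the assumption that the app is registered as an HTTP endpoint rather than an add-on:</p> <pre class="lang-py prettyprint-override"><code>import functions_framework
from flask import jsonify  # functions-framework is Flask-based

@functions_framework.http
def mi_app_de_chat(request):
    event = request.get_json(silent=True)
    if event and event.get('type') == 'ADDED_TO_SPACE':
        return jsonify({'text': 'Your Welcome'})
    # jsonify sets the Content-Type: application/json header explicitly
    return jsonify({'text': 'OK'})
</code></pre>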
<python><google-cloud-functions><google-cloud-run><google-chat>
2025-08-20 15:16:31
2
699
mesompi
79,741,190
11,793,491
Reduce x newlines into x-1 newlines using regex
<p>I have this text: <code>&quot;Anna lives in Latin America.\n\nShe loves the vibes from the cities\n and the good weather.\n\n\nAnna is great&quot;</code></p> <p>And I want to reduce every run of x newlines to x-1 newlines. So the expected result is: <code>&quot;Anna lives in Latin America.\nShe loves the vibes from the cities and the good weather.\n\nAnna is great&quot;</code></p> <p>I tried this:</p> <pre class="lang-py prettyprint-override"><code>import re

def clean_text(text):
    text = re.sub(r'\n{2,}', '\n', text)
    return text

result = clean_text(&quot;Anna lives in Latin America.\n\nShe loves the vibes from the cities\n and the good weather.\n\n\nAnna is great&quot;)
result
</code></pre> <p>But it returns <code>&quot;Anna lives in Latin America.\nShe loves the vibes from the cities\n and the good weather.\nAnna is great&quot;</code>, which is not what I expected.</p>
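<p>For reference, a minimal sketch of one way to express the &quot;x newlines become x-1&quot; rule directly: consume one <code>\n</code> and put back whatever run followed it, so runs of length 1 are handled too:</p> <pre class="lang-py prettyprint-override"><code>import re

def clean_text(text):
    # '\n'     -&gt; ''       (run of 1 becomes 0)
    # '\n\n'   -&gt; '\n'     (run of 2 becomes 1)
    # '\n\n\n' -&gt; '\n\n'   (run of 3 becomes 2)
    return re.sub(r'\n(\n*)', r'\1', text)
</code></pre>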
<python><regex>
2025-08-20 14:24:11
3
2,304
Alexis
79,741,096
9,217,084
Oauth client authorization fails because of the Google ADC
<p>I'm trying to work on my home project where I contact Google services like Gmail, Sheets, Drive. Services that are not Google Cloud per se.</p> <p>I've implemented GmailApi quickstart guide for python, but when I try to run I've got error about:</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\48513\A-i-tomations\Accounting\accounting-scripts\gmail_service.py&quot;, line 25, in &lt;module&gt; test(creds) File &quot;c:\Users\48513\A-i-tomations\Accounting\accounting-scripts\gmail_service.py&quot;, line 13, in test results = service.users().labels().list(userId=&quot;me&quot;).execute() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\googleapiclient\_helpers.py&quot;, line 130, in positional_wrapper return wrapped(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\googleapiclient\http.py&quot;, line 923, in execute resp, content = _retry_request( ^^^^^^^^^^^^^^^ File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\googleapiclient\http.py&quot;, line 191, in _retry_request resp, content = http.request(uri, method, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\google_auth_httplib2.py&quot;, line 209, in request self.credentials.before_request(self._request, method, uri, request_headers) File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\google\auth\credentials.py&quot;, line 239, in before_request self._blocking_refresh(request) File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\google\auth\credentials.py&quot;, line 202, in _blocking_refresh self.refresh(request) File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\google\oauth2\credentials.py&quot;, line 409, in refresh ) = reauth.refresh_grant( ^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\google\oauth2\reauth.py&quot;, line 349, in refresh_grant raise exceptions.RefreshError( google.auth.exceptions.RefreshError: Reauthentication is needed. Please run `gcloud auth application-default login` to reauthenticate. </code></pre> <p>But first of all I assume it should not use ADC just use OAuth authentication. With credentials from credentials.json.</p> <p>Here is the code I used to retrieve credentials:</p> <pre><code>CLIENTSECRETS_LOCATION = './credentials/credentials.json' TOKEN_LOCATION = './credentials/token.json' SCOPES = ['https://www.googleapis.com/auth/gmail.readonly'] def readCredentials(): creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists(TOKEN_LOCATION): print(&quot;Loading existing credentials from token.json&quot;) creds = Credentials.from_authorized_user_file(TOKEN_LOCATION, SCOPES) # If there are no (valid) credentials available, let the user log in. 
print(&quot;Checking credentials validity...&quot;) if not creds or not creds.valid: print(&quot;Credentials are not valid or expired.&quot;) if creds and creds.expired and creds.refresh_token: print(&quot;Refreshing expired credentials...&quot;) creds.refresh(Request()) else: print(&quot;No valid credentials available, initiating login flow.&quot;) flow = InstalledAppFlow.from_client_secrets_file(CLIENTSECRETS_LOCATION, SCOPES) creds = flow.run_local_server(port=8080) # Save the credentials for the next run with open(TOKEN_LOCATION, &quot;w&quot;) as token: token.write(creds.to_json()) </code></pre> <p>And the code invoking gmail API:</p> <pre><code>def test(creds: Credentials): # Call the Gmail API service: Resource = build(&quot;gmail&quot;, &quot;v1&quot;, credentials=creds) results = service.users().labels().list(userId=&quot;me&quot;).execute() labels = results.get(&quot;labels&quot;, []) if not labels: print(&quot;No labels found.&quot;) return print(&quot;Labels:&quot;) for label in labels: print(label[&quot;name&quot;]) if __name__ == &quot;__main__&quot;: creds = readCredentials() test(creds) </code></pre> <p>I tried to go with the error message and tried to perform <code>gcloud auth application-default login</code> with my personal account not companies managed, but then I've got this error:</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\48513\A-i-tomations\Accounting\accounting-scripts\gmail_service.py&quot;, line 25, in &lt;module&gt; test(creds) File &quot;c:\Users\48513\A-i-tomations\Accounting\accounting-scripts\gmail_service.py&quot;, line 13, in test results = service.users().labels().list(userId=&quot;me&quot;).execute() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\googleapiclient\_helpers.py&quot;, line 130, in positional_wrapper return wrapped(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\48513\A-i-tomations\Accounting\accounting-scripts\Lib\site-packages\googleapiclient\http.py&quot;, line 938, in execute raise HttpError(resp, content, uri=self.uri) googleapiclient.errors.HttpError: &lt;HttpError 403 when requesting https://gmail.googleapis.com/gmail/v1/users/me/labels?alt=json returned &quot;Request had insufficient authentication scopes.&quot;. Details: &quot;[{'message': 'Insufficient Permission', 'domain': 'global', 'reason': 'insufficientPermissions'}]&quot;&gt; </code></pre>
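<p>For what it's worth, a hedged sketch of one thing sometimes checked here — a stale <code>token.json</code> (for example one minted under different scopes) can send the library down the refresh/reauth path; removing it forces the <code>InstalledAppFlow</code> consent screen to run again with the current <code>SCOPES</code>. This is an assumption about the cause, not a confirmed fix:</p> <pre class="lang-py prettyprint-override"><code>import os

TOKEN_LOCATION = './credentials/token.json'

# discard the cached token so the next readCredentials() call re-runs the flow
if os.path.exists(TOKEN_LOCATION):
    os.remove(TOKEN_LOCATION)
</code></pre>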
<python><google-cloud-platform><google-oauth><gcloud>
2025-08-20 13:05:00
0
451
Kacper
79,741,057
9,715,816
Django connection_created signal is causing problems when testing
<p>In my django application I have a list of notification types and I want to allow customers to subscribe to one or more notification types.</p> <p>Each notification type has somewhat of a custom logic so the code of each notification has to be in a different class but I have created a singleton class that gathers all the notification type names and other metadata like description etc.</p> <p>I want to have a list in the database of all the supported notification types so that the relationship between customers and notification types can be stored in the database while customers subscribe to notification types. I want to have a notification type table so that I can store the metadata and a separate table to store the many-to-many relationship between customers and notifications.</p> <p>That is where <code>connection_created</code> signal comes in. I have created the following signal that creates the notification type items in the database when the <code>connection_created</code> signal is received so they get auto-updated when I am changing the code:</p> <pre class="lang-py prettyprint-override"><code>from django.db.backends.signals import connection_created from django.db.backends.sqlite3.base import ( DatabaseWrapper as SQLiteDatabaseWrapper, ) from django.db.backends.postgresql.base import ( DatabaseWrapper as PostgresDatabaseWrapper, ) from django.dispatch import receiver from notification_type_singleton import NotificationTypeSingleton from models import NotificationType @receiver(connection_created, sender=SQLiteDatabaseWrapper) @receiver(connection_created, sender=PostgresDatabaseWrapper) def create_or_update_notification_type(sender, **kwargs): exiting_ids = [] for _, notification_type in ( NotificationTypeSingleton._data.items() ): notification, _ = NotificationType.objects.update_or_create( name=notification_type.name, defaults={ 'description': notification_type.description, 'is_active': True, }, ) exiting_ids.append(notification.id) # Deactivate all notifications in the database that are not used NotificationType.objects.exclude(id__in=exiting_ids).update(is_active=False) # Update the registry with the created events NotificationTypeSingleton._registry = { notification.name: notification for notification in NotificationType.objects.filter(is_active=True) } </code></pre> <p>That seems to work fine when I bring up my application with <code>python manage.py runserver</code> but when I test with <code>pytest</code> and postgres, the signal is raised as expected but I get <code>RuntimeWarning: Normally Django will use a connection to the 'postgres' database to avoid running initialization queries against the production database when it's not needed (for example, when running tests). Django was unable to create a connection to the 'postgres' database and will use the first PostgreSQL database instead.</code> and if I comment out the code that is doing the queries in the signal then the error goes away.</p> <p>At the same time according to the <a href="https://docs.djangoproject.com/en/5.2/ref/signals/#connection-created" rel="nofollow noreferrer">docs</a> the <code>connection_created</code> signal should be used for</p> <blockquote> <p>...any post connection commands to the SQL backend.</p> </blockquote> <p>so I think I am in the intended use cases but it seems that there are side effects? Does anybody else have had similar problems with testing?</p>
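<p>For comparison, a sketch of an alternative wiring often used for seed/lookup data — populating the table once after migrations instead of on every new database connection. The app label and module paths below are assumptions:</p> <pre class="lang-py prettyprint-override"><code>from django.apps import AppConfig
from django.db.models.signals import post_migrate


def sync_notification_types(sender, **kwargs):
    # deferred import so the app registry is fully loaded
    from notifications.models import NotificationType  # hypothetical path
    ...  # same update_or_create / deactivate logic as in the signal above


class NotificationsConfig(AppConfig):
    name = 'notifications'  # hypothetical app label

    def ready(self):
        post_migrate.connect(sync_notification_types, sender=self)
</code></pre>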
<python><django><django-signals><pytest-django>
2025-08-20 12:15:26
1
2,019
Charalamm
79,740,944
5,722,359
How to fix Gtk.FileChooserDialogue height and resizing issues?
<p>I am experiencing a strange phenomenon with the <code>Gtk.FileChooserDialogue</code> widget (Gtk3).</p> <ol> <li><p>I can't get it to appear at the correct height. Output state height is 500px but its height is definitely much larger. Screen height is 1080px. This widget is almost reaching to the bottom of screen.</p> </li> <li><p>When manually resizing it with the mouse pointer, it uncontrollably shrunk to a very small size, get stuck to the top of the screen and I think it is just showing one of its child widget instead of the entire widget.</p> </li> </ol> <p>These issues are shown below. What am I doing wrong and how to fix them?</p> <p>Running on Ubuntu 24.04.3. GNOME shell version is 46.</p> <p><strong>Video of Issues:</strong></p> <p><a href="https://i.sstatic.net/TTVQLoJj.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TTVQLoJj.gif" alt="Issues" /></a></p> <p><strong>Sample Code:</strong></p> <pre><code>import sys import gi gi.require_version(&quot;Gtk&quot;, &quot;3.0&quot;) from gi.repository import GLib, Gtk # noqa: E402 class Chooser(Gtk.FileChooserDialog): def __init__(self, title=None, parent=None): super().__init__( title=title, parent=parent, action=Gtk.FileChooserAction.OPEN, default_height=500, # I tried initializing its height to 500px ) self.set_default_size(-1, 500) # I tried setting automatic height to 500px self.set_size_request(-1, 300) # I tried setting minimum height to 300px self.add_buttons( Gtk.STOCK_CANCEL, Gtk.ResponseType.CANCEL, Gtk.STOCK_OPEN, Gtk.ResponseType.OK, ) print(f&quot;{self.props.resizable=}&quot;) print(f&quot;{self.props.resize_mode=}&quot;) print(f&quot;{self.props.default_height=}&quot;) self.add_filters() self.run_self() def add_filters(self): filter_txt = Gtk.FileFilter() filter_txt.set_name(&quot;.txt files&quot;) filter_txt.add_mime_type(&quot;text&quot;) self.add_filter(filter_txt) def run_self(self): response = self.run() self.destroy() class App(Gtk.Application): def __init__(self): super().__init__(application_id=&quot;com.gnome.test&quot;) GLib.set_application_name(&quot;test&quot;) def do_activate(self): self.window = Gtk.ApplicationWindow(application=self, title=&quot;App&quot;) # Create a grid container self.grid = Gtk.Grid() self.window.add(self.grid) # Create Buttons self.btn = Gtk.Button(label=&quot;From&quot;) # Display buttons self.grid.attach(self.btn, 0, 0, 1, 1) self.grid.set_row_homogeneous(True) self.grid.set_column_homogeneous(True) # Connect Button to handler self.btn.connect(&quot;clicked&quot;, self.get_file) # Set Size self.window.set_default_size(width=500, height=34) # Show all widgets self.window.show_all() def get_file(self, widget): print(&quot;Chooser&quot;) chooser = Chooser(title=&quot;Chooser&quot;, parent=self.window) chooser.set_default_size(-1, 400) app = App() exit_status = app.run(sys.argv) sys.exit(exit_status) </code></pre> <p><strong>VS Code Output:</strong></p> <pre><code>Chooser self.props.resizable=True self.props.resize_mode=&lt;enum GTK_RESIZE_QUEUE of type Gtk.ResizeMode&gt; self.props.default_height=500 </code></pre> <p><strong>GTK Info:</strong></p> <pre><code>$ dpkg -l | grep GTK ii apport-gtk 2.28.1-0ubuntu3.8 all GTK+ frontend for the apport crash report system ii gir1.2-gnomebg-4.0:amd64 44.0-5build2 amd64 Introspection data for GnomeBG (GTK 4) ii gir1.2-gnomedesktop-3.0:amd64 44.0-5build2 amd64 Introspection data for GnomeDesktop (GTK 3) ii gir1.2-gnomedesktop-4.0:amd64 44.0-5build2 amd64 Introspection data for GnomeDesktop (GTK 4) ii gir1.2-gtk-3.0:amd64 
3.24.41-4ubuntu1.3 amd64 GTK graphical user interface library -- gir bindings ii gir1.2-gtk-4.0:amd64 4.14.5+ds-0ubuntu0.4 amd64 GTK graphical user interface library -- gir bindings ii gir1.2-gtksource-5:amd64 5.12.0-1build1 amd64 gir files for the GTK+ syntax highlighting widget ii gir1.2-javascriptcoregtk-4.1:amd64 2.48.5-0ubuntu0.24.04.1 amd64 JavaScript engine library from WebKitGTK - GObject introspection data ii gir1.2-javascriptcoregtk-6.0:amd64 2.48.5-0ubuntu0.24.04.1 amd64 JavaScript engine library from WebKitGTK - GObject introspection data ii gir1.2-webkit-6.0:amd64 2.48.5-0ubuntu0.24.04.1 amd64 Web content engine library for GTK - GObject introspection data ii gir1.2-webkit2-4.1:amd64 2.48.5-0ubuntu0.24.04.1 amd64 Web content engine library for GTK - GObject introspection data ii gnome-accessibility-themes 3.28-2ubuntu5 all High Contrast GTK 2 theme and icons ii gnome-themes-extra:amd64 3.28-2ubuntu5 amd64 Adwaita GTK 2 theme — engine ii gnome-themes-extra-data 3.28-2ubuntu5 all Adwaita GTK 2 theme and Adwaita-dark GTK 3 theme — common files ii gstreamer1.0-gtk3:amd64 1.24.2-1ubuntu1.1 amd64 GStreamer plugin for GTK+3 ii gtk2-engines-pixbuf:amd64 2.24.33-4ubuntu1.1 amd64 pixbuf-based theme for GTK 2 ii ibus-gtk:amd64 1.5.29-2 amd64 Intelligent Input Bus - GTK2 support ii ibus-gtk3:amd64 1.5.29-2 amd64 Intelligent Input Bus - GTK3 support ii ibus-gtk4:amd64 1.5.29-2 amd64 Intelligent Input Bus - GTK4 support ii libadwaita-1-0:amd64 1.5.0-1ubuntu2 amd64 Library with GTK widgets for mobile phones ii libavahi-ui-gtk3-0:amd64 0.8-13ubuntu6 amd64 Avahi GTK+ User interface library for GTK3 ii libayatana-appindicator3-1 0.5.93-1build3 amd64 Ayatana Application Indicators (GTK-3+ version) ii libayatana-indicator3-7:amd64 0.9.4-1build1 amd64 panel indicator applet - shared library (GTK-3+ variant) ii libcanberra-gtk-module:amd64 0.30-10ubuntu10 amd64 translates GTK+ widgets signals to event sounds ii libcanberra-gtk0:amd64 0.30-10ubuntu10 amd64 GTK+ helper for playing widget event sounds with libcanberra ii libcanberra-gtk3-0t64:amd64 0.30-10ubuntu10 amd64 GTK+ 3.0 helper for playing widget event sounds with libcanberra ii libcanberra-gtk3-module:amd64 0.30-10ubuntu10 amd64 translates GTK3 widgets signals to event sounds ii libcolord-gtk4-1t64:amd64 0.3.1-1build2 amd64 GTK4 convenience library for interacting with colord ii libdbusmenu-gtk3-4:amd64 18.10.20180917~bzr492+repack1-3.1ubuntu5 amd64 library for passing menus over DBus - GTK-3+ version ii libdecor-0-plugin-1-gtk:amd64 0.2.2-1build2 amd64 libdecor decoration plugin using GTK ii libedataserverui4-1.0-0t64:amd64 3.52.3-0ubuntu1 amd64 GTK4 utility library for evolution data servers ii libgnome-desktop-3-20t64:amd64 44.0-5build2 amd64 Utility library for the GNOME desktop - GTK 3 version ii libgspell-1-2:amd64 1.12.2-1build4 amd64 spell-checking library for GTK+ applications ii libgtk-3-0t64:amd64 3.24.41-4ubuntu1.3 amd64 GTK graphical user interface library ii libgtk-3-bin 3.24.41-4ubuntu1.3 amd64 programs for the GTK graphical user interface library ii libgtk-3-common 3.24.41-4ubuntu1.3 all common files for the GTK graphical user interface library ii libgtk-4-1:amd64 4.14.5+ds-0ubuntu0.4 amd64 GTK graphical user interface library ii libgtk-4-bin 4.14.5+ds-0ubuntu0.4 amd64 programs for the GTK graphical user interface library ii libgtk-4-common 4.14.5+ds-0ubuntu0.4 all common files for the GTK graphical user interface library ii libgtk-4-dev:amd64 4.14.5+ds-0ubuntu0.4 amd64 development files for the GTK library ii 
libgtk-4-media-gstreamer 4.14.5+ds-0ubuntu0.4 amd64 GStreamer media backend for the GTK graphical user interface library ii libgtk2.0-0t64:amd64 2.24.33-4ubuntu1.1 amd64 GTK graphical user interface library - old version ii libgtk2.0-bin 2.24.33-4ubuntu1.1 amd64 programs for the GTK graphical user interface library ii libgtk2.0-common 2.24.33-4ubuntu1.1 all common files for the GTK graphical user interface library ii libgtk3-perl 0.038-3 all Perl bindings for the GTK+ graphical user interface library ii libgtkmm-4.0-0:amd64 4.10.0-4build3 amd64 C++ wrappers for GTK4 (shared libraries) ii libgtkmm-4.0-dev:amd64 4.10.0-4build3 amd64 C++ wrappers for GTK4 (development files) ii libgtksourceview-5-0:amd64 5.12.0-1build1 amd64 shared libraries for the GTK 4 syntax highlighting widget ii libgtksourceview-5-common 5.12.0-1build1 all common files for the GTK 4 syntax highlighting widget ii libhandy-1-0:amd64 1.8.3-1build2 amd64 Library with GTK widgets for mobile phones ii libjavascriptcoregtk-4.1-0:amd64 2.48.5-0ubuntu0.24.04.1 amd64 JavaScript engine library from WebKitGTK ii libjavascriptcoregtk-6.0-1:amd64 2.48.5-0ubuntu0.24.04.1 amd64 JavaScript engine library from WebKitGTK ii libnma-gtk4-0:amd64 1.10.6-3build2 amd64 NetworkManager GUI GTK4 library ii libpanel-1-1:amd64 1.6.0-1build1 amd64 IDE paneling library for GTK ii libpanel-common 1.6.0-1build1 all IDE paneling library for GTK - common files ii libportal-gtk3-1:amd64 0.7.1-5build5 amd64 Flatpak portal library for GTK 3 GUIs ii libportal-gtk4-1:amd64 0.7.1-5build5 amd64 Flatpak portal library for GTK 4 GUIs ii libreoffice-gtk3 4:24.2.7-0ubuntu0.24.04.4 amd64 office productivity suite -- GTK+ 3 integration ii libtext-engine-0.1-0 0.1.1-4build2 amd64 Rich-text editing framework for GTK 4 ii libvte-2.91-0:amd64 0.76.0-1ubuntu0.1 amd64 Terminal emulator widget for GTK+ 3.0 - runtime files ii libvte-2.91-common 0.76.0-1ubuntu0.1 amd64 Terminal emulator widget for GTK+ 3.0 - common files ii libvte-2.91-gtk4-0:amd64 0.76.0-1ubuntu0.1 amd64 Terminal emulator widget for GTK 4 - runtime files ii libwebkit2gtk-4.1-0:amd64 2.48.5-0ubuntu0.24.04.1 amd64 Web content engine library for GTK ii libwebkitgtk-6.0-4:amd64 2.48.5-0ubuntu0.24.04.1 amd64 Web content engine library for GTK ii libwmf-0.2-7-gtk:amd64 0.2.13-1.1build3 amd64 Windows metafile conversion GTK pixbuf plugin ii libwmf0.2-7-gtk:amd64 0.2.13-1.1build3 amd64 Windows metafile conversion GTK pixbuf plugin - transitional package ii libwxgtk3.2-1t64:amd64 3.2.4+dfsg-4build1 amd64 wxWidgets Cross-platform C++ GUI toolkit (GTK 3 runtime) ii python3-aptdaemon.gtk3widgets 1.1.1+bzr982-0ubuntu44 all Python 3 GTK+ 3 widgets to run an aptdaemon client ii qt5-gtk-platformtheme:amd64 5.15.13+dfsg-1ubuntu1 amd64 Qt 5 GTK+ 3 platform theme ii qt6-gtk-platformtheme:amd64 6.4.2+dfsg-21.1build5 amd64 Qt 6 GTK+ 3 platform theme ii remmina 1.4.35+dfsg-0ubuntu5.1 amd64 GTK+ Remote Desktop Client ii transmission-gtk 4.0.5-1build5 amd64 lightweight BitTorrent client (GTK+ interface) ii xdg-desktop-portal-gtk 1.15.1-1build2 amd64 GTK+/GNOME portal backend for xdg-desktop-portal ii yaru-theme-gtk 24.04.2-0ubuntu1 all Yaru GTK theme from the Ubuntu Community </code></pre>
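<p>Not shown in the question, but one workaround sketch sometimes tried for stubborn dialog geometry is re-asserting the size once the window is realized — treat this as an experiment, not a known fix:</p> <pre class="lang-py prettyprint-override"><code># hypothetical experiment: inside Chooser.__init__, before self.run_self(),
# re-assert the wanted geometry once the window is actually realized
self.connect('realize', lambda w: w.resize(600, 500))
</code></pre>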
<python><gtk><gtk3>
2025-08-20 10:19:36
1
8,499
Sun Bear
79,740,866
9,072,753
How to type annotate a "unique" function?
<p>I want to make a small alias for <code>sorted(list(set(...)))</code>. I do:</p> <pre><code>from typing import Iterable, TypeVar H = TypeVar(&quot;H&quot;) def unique(x: Iterable[H]) -&gt; list[H]: return sorted(list(set(x))) unique(a for a in [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]) </code></pre> <p>but this fails with:</p> <pre class="lang-none prettyprint-override"><code>main.py:29: error: Value of type variable &quot;SupportsRichComparisonT&quot; of &quot;sorted&quot; cannot be &quot;H&quot; [type-var] Found 1 error in 1 file (checked 1 source file) </code></pre> <p>Ok, so then I do:</p> <pre><code>from abc import ABCMeta, abstractmethod from typing import Any, Iterable, TypeVar class ComparableHashable(metaclass=ABCMeta): @abstractmethod def __lt__(self, other: Any) -&gt; bool: ... @abstractmethod def __hash__(self) -&gt; int: pass H = TypeVar(&quot;H&quot;, bound=ComparableHashable) def unique(x: Iterable[H]) -&gt; list[H]: return sorted(list(set(x))) unique(a for a in [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]) </code></pre> <p>but this fails with:</p> <pre class="lang-none prettyprint-override"><code>main.py:31: error: Value of type variable &quot;H&quot; of &quot;unique&quot; cannot be &quot;str&quot; [type-var] Found 1 error in 1 file (checked 1 source file) </code></pre> <p>How to do it?</p>
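<p>A sketch of the structural variant that — at least with recent mypy — makes the second attempt pass: declaring the bound as a <code>typing.Protocol</code> (rather than an <code>ABCMeta</code> class) lets <code>str</code> match by shape instead of by inheritance:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any, Iterable, Protocol, TypeVar


class ComparableHashable(Protocol):
    def __lt__(self, other: Any) -&gt; bool: ...
    def __hash__(self) -&gt; int: ...


H = TypeVar('H', bound=ComparableHashable)


def unique(x: Iterable[H]) -&gt; list[H]:
    return sorted(set(x))


unique(a for a in ['a', 'b', 'c'])
</code></pre>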
<python><python-typing><mypy>
2025-08-20 09:09:38
0
145,478
KamilCuk
79,740,816
270,043
Faster way to filter for matching records between 2 PySpark dataframes
<p>I'm trying to write a PySpark program that filters for records in a very large dataframe (700M to 1B records) that matches some conditions on another smaller reference dataframe (450K records). This is done using a left join between the 2 dataframes, and then writing the results to a parquet file. However, I'm facing issues getting the PySpark program to run successfully.</p> <ul> <li>When the reference database is empty, the program completes successfully</li> <li>The program seems to hang at some point (for hours) when I try the following <ul> <li>When the reference dataframe has all 450K</li> <li>When I split the reference dataframe to 5 or 10 chunks against the entire large dataframe</li> <li>When I split the reference dataframe to 5 chunks, and the large dataframe to 10 chunks</li> </ul> </li> </ul> <p>Splitting both the reference dataframe and large dataframe seem to get further than the other tries. I looked at the output folder, there is a parquet file buried deep in it (<code>&lt;output_folder&gt;/_temporary/0/_temporary/attempt_&lt;long string of numbers prefixed by date&gt;/part-00000-....snappy.parquet</code>). However, this file is 0 byte.</p> <p>My code (without the splitting) is as follows.</p> <pre><code>def extract_to_df(spark, ref_db): columns_to_drop = [&quot;ColA&quot;, &quot;ColB&quot;, &quot;ColC&quot;] # Join conditions join_cond_1 = (col(&quot;Col1&quot;) &gt;= col(&quot;Col3a&quot;)) &amp; (col(&quot;Col1&quot;) &gt;= col(&quot;Col3b&quot;)) join_cond_2 = (col(&quot;Col2&quot;) &gt;= col(&quot;Col3a&quot;)) &amp; (col(&quot;Col2&quot;) &gt;= col(&quot;Col3b&quot;)) df = spark.read.parquet(folder) df_2 = df.filter(df[&quot;Col4&quot;]==&quot;abc&quot;).withColumn(&quot;Col1&quot;, udf_col(col(&quot;Col1a&quot;))).withColumn(&quot;Col2&quot;, udf_col(col(&quot;Col2a&quot;))) df_tmp = df_2.join(ref_db, on=join_cond_1, how=&quot;left&quot;).drop(*columns_to_drop).withColumnRenamed(&quot;Col5&quot;, &quot;Col5a&quot;) df_results = df_tmp.join(ref_db, on=join_cond_2, how=&quot;left&quot;).drop(*columns_to_drop).withColumnRenamed(&quot;Col6&quot;, &quot;Col6a&quot;) df_final_results = df_results.dropna(subset=[&quot;Col5a&quot;, &quot;Col6a&quot;]) df_final_results.write.mode(&quot;append&quot;).parquet(output_folder) def main(): ref_db = spark.read.parquet(&quot;/ref_db.parquet&quot;) extract_to_df(spark, ref_db) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>Maybe this isn't the most efficient way to do what I want. Is there a faster way to do this than do 2 joins?</p>
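<p>One lever worth sketching for a 450K-row reference table is an explicit broadcast hint, so the non-equi join becomes a broadcast nested-loop join instead of shuffling the billion-row side — a sketch of the idea, not a guaranteed fix:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql.functions import broadcast

# ship ref_db to every executor once; the large df is then streamed past it
df_tmp = df_2.join(broadcast(ref_db), on=join_cond_1, how='left')
</code></pre>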
<python><dataframe><pyspark>
2025-08-20 08:27:57
0
15,187
Rayne
79,740,656
1,581,090
How to fix pyenv on windows 11?
<p>On windows 11 I use pyenv to be able to select a specific python version. I want to select python 3.11.9, and the output of</p> <pre><code>pyenv versions </code></pre> <p>is:</p> <pre><code> 3.10.11 3.11.8 * 3.11.9 (set by C:\Users\WORK\.pyenv\pyenv-win\version) </code></pre> <p>However, the installed python version is 3.10.11:</p> <pre><code>&gt; python --version Python 3.10.11 </code></pre> <p>How to fix this issue?</p>
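<p>A small diagnostic sketch for this situation — showing which executable actually wins on <code>PATH</code> (if the printed path is not under <code>.pyenv</code>, another install is shadowing the shims):</p> <pre class="lang-py prettyprint-override"><code># run with the 'python' that misbehaves; prints the resolved executable paths
import shutil
import sys

print(sys.executable)          # the interpreter actually running
print(shutil.which('python'))  # the first 'python' found on PATH
</code></pre>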
<python><windows-11><pyenv>
2025-08-20 05:40:06
1
45,023
Alex
79,740,643
11,082,866
Why does MQTT subscription adds a lag in data streaming after using clean session
<p>I have an RFID reader which is connected to my code via MQTT. I want to design the system in such a way that the user should have a Start API and a Stop API and an API which receives the data and transform it to make data readable.</p> <p>Now the reader keeps on sending the data to a topic but my code connects and subscribe to it when start is triggered but upon subscription I have to wait around 30 seconds to get realtime data, for those 30 seconds I am getting the previously scanned tags.</p> <pre><code>IST = timezone(timedelta(hours=5, minutes=30)) # MQTT config MQTT_BROKER = &quot;xxx.xx.xx&quot; MQTT_PORT = 1883 MQTT_TOPIC = &quot;xyz&quot; client = mqtt.Client(clean_session=True) is_mqtt_connected = False # mqtt_handler.py scan_state = { &quot;is_active&quot;: False, &quot;start_time&quot;: None, &quot;accepted_tags&quot;: {}, } current_session_id = None def set_current_session_id(session_id): global current_session_id current_session_id = session_id def get_current_session_id(): return current_session_id def clear_retained_message(): # publish empty retained message → clears broker memory client.publish(MQTT_TOPIC, payload=None, qos=1, retain=True) def on_connect(client, userdata, flags, rc): print(f&quot;[MQTT] Connected with result code {rc}&quot;) client.subscribe(MQTT_TOPIC, qos=1) def on_message(client, userdata, msg): try: data = json.loads(msg.payload.decode()) epc = data['tagInventoryEvent']['epcHex'] timestamp_str = data['timestamp'] timestamp = datetime.fromisoformat(timestamp_str.replace(&quot;Z&quot;, &quot;+00:00&quot;)) if timestamp.tzinfo is None: timestamp = make_aware(timestamp, timezone=pytz.UTC) else: timestamp = timestamp.astimezone(pytz.UTC) session = ScanSession.objects.filter(is_active=True).order_by(&quot;-start_time&quot;).first() if session: start_time = session.start_time if start_time.tzinfo is None: start_time = make_aware(start_time, timezone=pytz.UTC) else: start_time = start_time.astimezone(pytz.UTC) print(f&quot;[MQTT] Start time: {start_time}&quot;) if timestamp &gt;= start_time: print(f&quot;Accepted tag: {epc} at {timestamp.isoformat()} for session {session.id}&quot;) ScannedTag.objects.create(epc=epc, timestamp=timestamp, scan_session=session) else: print(f&quot;Ignored tag before session start: {epc} at {timestamp}&quot;) else: print(f&quot;Ignored tag, no active scan session: {epc} at {timestamp}&quot;) except Exception as e: print(f&quot; Error in on_message: {e}&quot;) def start_mqtt(): global is_mqtt_connected client.on_connect = on_connect client.on_message = on_message # if not is_mqtt_connected: print(&quot;Connecting MQTT...&quot;) client.connect(MQTT_BROKER, MQTT_PORT, 60) clear_retained_message() # clear stale tag before listening client.loop_start() is_mqtt_connected = True print(&quot;[MQTT] Loop started and subscribed.&quot;) # else: # print(&quot;[MQTT] Already connected.&quot;) </code></pre> <p>Now the sample result that I get after starting looks like this:</p> <pre><code>[MQTT] Start time: 2025-08-18 07:35:49.246005+00:00 [⛔️] Ignored tag before session start: 504C313235342F3030313138 at 2025-08-18 07:35:22.421264+00:00 [MQTT] Start time: 2025-08-18 07:35:49.246005+00:00 [⛔️] Ignored tag before session start: 504C313235342F3030313236 at 2025-08-18 07:35:22.421628+00:00 [MQTT] Start time: 2025-08-18 07:35:49.246005+00:00 [⛔️] Ignored tag before session start: 504C313235342F3030313132 at 2025-08-18 07:35:22.671264+00:00 [MQTT] Start time: 2025-08-18 07:35:49.246005+00:00 [⛔️] Ignored tag before session start: 
504C313235342F3030313231 at 2025-08-18 07:35:22.921329+00:00 [MQTT] Start time: 2025-08-18 07:35:49.246005+00:00 </code></pre> <p>When the start time is at second 49 and the session is clean, why am I getting tags that were read at second 22?</p>
<python><mqtt>
2025-08-20 05:20:44
0
2,506
Rahul Sharma
79,740,626
13,352,657
How to generate a typed Python SDK for a GraphQL API
<p>I'm trying to set up a nice Python client for a GraphQL API managed by a separate team (in a different language): We want to provide useful type hints + autocomplete without introducing overly onerous maintenance requirements for our Python layer.</p> <p>I saw that <code>gql</code> looks popular and <a href="https://gql.readthedocs.io/en/latest/advanced/dsl_module.html" rel="nofollow noreferrer">offers a Python DSL</a> for building queries against schemas... So was thinking to take that and:</p> <ol> <li>Use introspection (e.g. <a href="https://graphql-core-3.readthedocs.io/en/stable/usage/introspection.html" rel="nofollow noreferrer">here in graphql-core</a>) on the endpoint to fetch a GraphQL Schema Definition Language description</li> <li>Build that schema (or some transformation of it) into our Python library in an automated pipeline, so we can easily release new versions if/when the schema updates.</li> <li>Ideally still offer flexibility to escape from the typing system if the library goes stale and there's an urgent need</li> </ol> <p>But just storing the SDL is not enough, because building the gql DSLSchema from this is still dynamic so (at least my, VSCode) IDE has no idea what's going on:</p> <pre class="lang-py prettyprint-override"><code>from gql.dsl import DSLQuery, DSLSchema, dsl_gql from graphql import build_schema # (File saved with `graphql.print_schema()`) with open(&quot;schema.gql&quot;) as f: ds = DSLSchema(build_schema(f.read())) # ⚠️ IDE autocomplete's not aware of `.me`, `User`, or `.id`: query = dsl_gql( DSLQuery( ds.Query.me.select(ds.User.id) ) ) </code></pre> <p>Is there some common way to tackle this that I'm missing? Or a code generation tool to go from GraphQL SDL to <code>gql.dsl.DSLSchema</code>? We're not necessarily tied to <code>gql</code> yet if another library makes more sense (<a href="https://github.com/profusion/sgqlc" rel="nofollow noreferrer">sgqlc</a>? <a href="https://github.com/mirumee/ariadne-codegen" rel="nofollow noreferrer">ariadne-codegen</a>?)</p>
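<p>As a concrete sketch of step 1 — snapshotting the remote schema as SDL with <code>graphql-core</code>'s introspection helpers; the endpoint URL and the plain-<code>requests</code> transport are assumptions:</p> <pre class="lang-py prettyprint-override"><code>import requests
from graphql import build_client_schema, get_introspection_query, print_schema

API_URL = 'https://example.com/graphql'  # hypothetical endpoint

resp = requests.post(API_URL, json={'query': get_introspection_query()})
schema = build_client_schema(resp.json()['data'])

# saved SDL feeds the build pipeline (and the DSLSchema snippet above)
with open('schema.gql', 'w') as f:
    f.write(print_schema(schema))
</code></pre>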
<python><graphql><ariadne-graphql>
2025-08-20 04:48:55
0
1,069
dingus
79,740,460
1,604,008
Trying to run selenium in linux but can't find driver
<pre><code>-rwxr-xr-x 1 kyle kyle 6132584 Feb 24 09:27 /usr/local/bin/geckodriver echo $PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/kyle/.dotnet/tools </code></pre> <p>It is my understanding selenium should be able to find the driver automatically. That does not seem to work for me.</p> <pre><code>pip show selenium Name: selenium Version: 4.18.1 </code></pre> <p>If I try and run my python script with:</p> <pre><code>driver = webdriver.Firefox() </code></pre> <p>I get</p> <pre><code>Message: Unable to obtain driver for firefox using Selenium Manager. </code></pre> <p>If I use</p> <pre><code>driver = webdriver.Firefox(&quot;/usr/local/bin/geckodriver&quot;) </code></pre> <pre><code>Traceback (most recent call last): File &quot;/usr/lib/python3/dist-packages/selenium/webdriver/common/driver_finder.py&quot;, line 38, in get_path path = SeleniumManager().driver_location(options) if path is None else path ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/selenium/webdriver/common/selenium_manager.py&quot;, line 87, in driver_location browser = options.capabilities[&quot;browserName&quot;] ^^^^^^^^^^^^^^^^^^^^ AttributeError: 'str' object has no attribute 'capabilities' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/home/kyle/Documents/ElecticityUsage.py&quot;, line 181, in &lt;module&gt; driver = webdriver.Firefox(&quot;/usr/local/bin/geckodriver&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/selenium/webdriver/firefox/webdriver.py&quot;, line 59, in __init__ self.service.path = DriverFinder.get_path(self.service, options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3/dist-packages/selenium/webdriver/common/driver_finder.py&quot;, line 40, in get_path msg = f&quot;Unable to obtain driver for {options.capabilities['browserName']} using Selenium Manager.&quot; ^^^^^^^^^^^^^^^^^^^^ AttributeError: 'str' object has no attribute 'capabilities' </code></pre>
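<p>For reference, a sketch of how an explicit driver path is passed in Selenium 4 — through a <code>Service</code> object rather than a positional string (a bare string is what produces the <code>'str' object has no attribute 'capabilities'</code> traceback above):</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.firefox.service import Service

driver = webdriver.Firefox(service=Service('/usr/local/bin/geckodriver'))
</code></pre>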
<python><selenium-webdriver><geckodriver><selenium4><seleniummanager>
2025-08-19 22:35:30
2
1,159
user1604008
79,740,439
894,827
Deploying a python web app on Azure the page isnt displaying
<p>I appreciate that this isn't a reproducible issue because I cannot place the code in the public domain. I have a Python app that I am trying to deploy as an Azure web app; I have followed the instructions on the readme page and changed a few things within the configuration settings of the app.</p> <p>Then I ran the following.</p> <p><code>az login</code></p> <p>Changed the configuration settings for the app, then zipped all the files together into a zip file.</p> <p>Navigated to the folder which contains the deployment zip file and ran the command below.</p> <pre><code>az webapp deploy --resource-group 'my_rg' --name 'my-app-name' --src-path 'C:\projects\app\deployment_package.zip'
</code></pre> <p>I checked the deployment centre logs within app services.</p> <pre><code>17/08/2025, 06:52:40 PM Updating submodules.
17/08/2025, 06:52:41 PM Preparing deployment for commit id '0f38838c-e'.
17/08/2025, 06:52:42 PM PreDeployment: context.CleanOutputPath False
17/08/2025, 06:52:42 PM PreDeployment: context.OutputPath /home/site/wwwroot
17/08/2025, 06:52:42 PM Skipping build. Project type: Run-From-Zip
17/08/2025, 06:52:42 PM Skipping post build. Project type: Run-From-Zip
17/08/2025, 06:52:42 PM Triggering container recycle for OneDeploy by adding/updating restartTrigger.txt to the site root path
17/08/2025, 06:52:42 PM Updating /home/data/SitePackages/packagename.txt with deployment 20250817175236.zip
17/08/2025, 06:52:42 PM Deployment successful. deployer = OneDeploy deploymentPath = OneDeploy
</code></pre> <p>The contents of the deployment package</p> <p><a href="https://i.sstatic.net/c64fCzgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c64fCzgY.png" alt="Deployment package contents" /></a></p> <p>The deployed application is shown below; see the section at the bottom where it asks whether I should be expecting to see an app. The answer is yes, but I don't know why it's not working.</p> <p><a href="https://i.sstatic.net/cwhJQxKg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwhJQxKg.png" alt="enter image description here" /></a></p>
<python><azure-web-app-service>
2025-08-19 22:02:22
0
1,099
learner
79,740,398
46,521
polars streaming: downsample & write parquet based on shift(-1)
<p>I'm trying to downsample a large parquet file with polars.</p> <p>Does polars support the following workflow in a streaming manner?</p> <p>As written, it uses &gt;60GB of RAM. It should be easy to achieve in O(1) memory:</p> <pre><code>import os ; os.environ['POLARS_MAX_THREADS'] = '4' import polars as pl import time, random import numpy as np random.seed(42) N_TICKS = 100_000 N_TICKERS = 10_000 T0 = 1755634720560000000 def generate_fake_example_data(): tmp = [] for t in range(N_TICKERS): print(t,end=',') tmp.append(pl.DataFrame({ &quot;ticker&quot;: [f&quot;ticker{t}&quot;] * N_TICKS, &quot;epoch_nanos&quot;: T0 + np.cumsum(np.random.randint(1e7, 1e10, size=N_TICKS)), &quot;price&quot;: np.round(np.random.uniform(100, 400, size=N_TICKS), 2), })) data = pl.concat(tmp) print(f&quot;{len(data)=}&quot;) data.write_parquet(&quot;example_input.parquet&quot;) generate_fake_example_data() !ls -lah example_input.parquet print(pl.__version__) DOWNSAMPLE_NANOS = int(1e11) # RAM usage spikes by 60GiB d = pl.scan_parquet(&quot;example_input.parquet&quot;) d = d.with_columns((pl.col('epoch_nanos') // DOWNSAMPLE_NANOS).alias('ts_bucket')) d = d.filter( (pl.col('ticker') != pl.col('ticker').shift(-1).fill_null('EOF')) |(pl.col('ts_bucket') != pl.col('ts_bucket').shift(-1)) ).drop('ts_bucket') print(d.explain(engine='streaming')) d.sink_parquet(&quot;example_output.parquet&quot;,engine='streaming') </code></pre> <p>output:</p> <pre><code>-rw-rw-r--. 1 ec2-user ec2-user 9.4G Aug 19 20:42 example_input.parquet 1.32.3 simple π 3/3 [&quot;ticker&quot;, &quot;epoch_nanos&quot;, ... 1 other column] FILTER [([(col(&quot;ticker&quot;)) != (col(&quot;ticker&quot;).shift([dyn int: -1]).fill_null([&quot;EOF&quot;]))]) | ([(col(&quot;ts_bucket&quot;)) != (col(&quot;ts_bucket&quot;).shift([dyn int: -1]))])] FROM WITH_COLUMNS: [[(col(&quot;epoch_nanos&quot;)) floor_div (1000000000)].alias(&quot;ts_bucket&quot;)] Parquet SCAN [example_input.parquet] PROJECT 3/3 COLUMNS </code></pre>
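<p>For comparison, a hedged sketch of the same downsampling phrased as a grouped aggregation, which avoids the global <code>shift(-1)</code>. It is only equivalent if each <code>(ticker, ts_bucket)</code> pair occupies one contiguous run of rows, as in the generated example:</p> <pre class="lang-py prettyprint-override"><code>d = pl.scan_parquet('example_input.parquet')
d = d.with_columns((pl.col('epoch_nanos') // DOWNSAMPLE_NANOS).alias('ts_bucket'))
# keep the last row of every (ticker, bucket) group instead of peeking ahead
d = d.group_by(['ticker', 'ts_bucket'], maintain_order=True).last().drop('ts_bucket')
d.sink_parquet('example_output.parquet', engine='streaming')
</code></pre>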
<python><dataframe><parquet><python-polars>
2025-08-19 20:54:34
1
6,651
tba
79,740,335
759,880
Python mutex.cc lock issue
<p>I have a python program that hangs in a library call on this message:</p> <pre><code>[mutex.cc : 452] RAW: Lock blocking 0x6000009e1158 @ </code></pre> <p>I started the program with <code>trace</code> to see what is going on, and I get (a lot of):</p> <pre><code>&lt;frozen importlib._bootstrap&gt;(668): &lt;frozen importlib._bootstrap&gt;(670): &lt;frozen importlib._bootstrap&gt;(676): --- modulename: _bootstrap, funcname: module_from_spec &lt;frozen importlib._bootstrap&gt;(569): &lt;frozen importlib._bootstrap&gt;(570): &lt;frozen importlib._bootstrap&gt;(573): --- modulename: _bootstrap_external, funcname: create_module &lt;frozen importlib._bootstrap_external&gt;(1233): &lt;frozen importlib._bootstrap_external&gt;(1234): &lt;frozen importlib._bootstrap_external&gt;(1233): --- modulename: _bootstrap, funcname: _call_with_frames_removed </code></pre> <p>In fact, those are the 3 last lines before the mutex message.</p> <p>What do those lines mean? Is that a clue to the mutex issue?</p> <p>I'm on the latest MacOS with python 3.11.13. The shell is <code>zsh</code>. The program is executed with <code>python3 -m trace --trace myscript.py</code>.</p>
<python><python-3.x><mutex>
2025-08-19 19:36:03
0
4,483
ToBeOrNotToBe
79,740,174
524,368
How to unpack a buffer of 12-bit values into an array of normalized float32
<p>A measurement system (in our lab) produces data of 12 bits per sample in a packed format, i.e. 2 samples of 12 bits each are packed into 3 bytes:</p> <pre class="lang-none prettyprint-override"><code> buf[l + 2] | buf[l + 1] | buf[l + 0] 7 6 5 4 3 2 1 0|7 6 5 4 3 2 1 0|7 6 5 4 3 2 1 0 -----------------------|----------------------- B A 9 8 7 6 5 4 3 2 1 0|B A 9 8 7 6 5 4 3 2 1 0 sample[2*i + 1] | sample[2*i + 0] </code></pre> <p>For NumPy I created the following unpacking function that will take a Python byte buffer apply some stride tricks and bit manipulations to it, returning the desired <em>float32</em> array:</p> <pre class="lang-py prettyprint-override"><code>def unpack_s12p_to_f32(buf): import numpy import numpy.lib.stride_tricks as npst s12p = numpy.frombuffer(buf, dtype=numpy.int32) s12p_sv = numpy.copy( numpy.transpose( npst.as_strided(s12p, shape=(2, int((s12p.size*4)/3)), strides=(0,3), writeable=False) )) m12b = (1&lt;&lt;12)-1 s12p_sv[:,0] &amp;= m12b s12p_sv[:,0] &lt;&lt;= 20 s12p_sv[:,1] &gt;&gt;= 12 s12p_sv[:,1] &amp;= m12b s12p_sv[:,1] &lt;&lt;= 20 return s12p_sv.reshape(-1).astype(numpy.float32) * (2.**-31) </code></pre> <p>We now seek a method to replicate this function within Matlab. However, I was unsuccessful finding/identifying equivalent functions that would allow me to manipulate Matlab array objects in the same way.</p> <hr /> <h3>Example data and conversion result</h3> <p>From one of our datasets I extracted 200 samples. When written as Python buffer literal, passed through the above unpacking function and plotted using Matplotlib it looks like this</p> <pre class="lang-py prettyprint-override"><code>from matplotlib.pyplot import plot,show data = \ b'\x19P\x05g\x90\x05I\x10\x01\xcf_\xfa\x87\x7f\xf5a\xbf\xf7\xb7\xff\xff9\xf0\x04]P' \ b'\x04)\x90\xfe\xad\xdf\xf6M\x1f\xf4s\x7f\xfb\xf5\x7f\x02=\xd0\x04W\xf0\x01\xfb\x7f' \ b'\xfb\x81\xff\xf6\x85\x7f\xf7\x8f\xbf\xfb\x05p\x03O\x90\x045\x90\x02\xf7\x7f\xfb' \ b'\x7f\xff\xf6q_\xf7\xb7\xff\xff!p\x03G\x90\x02\xdd\x7f\xfb\xc3\xdf\xf9s\xff\xf6' \ b'\x91?\xfb\xeb?\x011\xf0\x035\xb0\xff\xe1\xff\xfa\x81\xbf\xf6\x89\x9f\xf9\xc7\x9f' \ b'\xfe\r\x90\x02=\xf0\x02\x19\x90\xfe\xc3?\xf9\xa3\x1f\xfa\x8d\x7f\xf7\x99\xbf\xfc' \ b'\t\x10\x03=0\x02\x13\xb0\xfe\xb3?\xf8\x8b\xff\xf7\x83\xbf\xf9\xcd_\xfe\x050\x01' \ b'\x130\xff\xd3?\xfc\xb9\xbf\xfa\xa5\xdf\xf9\xa5_\xfc\xe9\xdf\xfe\xdb\xff\xfd\xf7' \ b'\x9f\xff\xe7\x9f\xfb\xcb?\xfc\xbd\xff\xf9\xab\xff\xfc\xfd\xdf\xff\xf2O\xfe\xe2\xcf' \ b'\xfc\xdc\xcf\xfc\xd4\xaf\xfd\xe8\xcf\xfd\xdc\xcf\xfd\xfc\x8f\x00\xfa\xaf\xfe\xec' \ b'\x0f\xfd\xc6/\xfc\xde\x0f\xff\xf2\xcf\xfd\xe2\x8f\xfe\xe8/\xff\xf4o\xfc\xce\xcf\xff' \ b'\x08@\xff\xf8\xef\xfd\xe8o\x00\x10`\xff\xe6/\xfd\xeco\xff\x06`\x00\xfe\x8f\xfe\xfa' \ b'\xef\xff\xe2\xaf\xfc\xdao\xfe\x00\x00\x00\xf8\x8f\xfd\xe0\xef\xfe\x00\xc0\xfe\xeao\xfe' plot( unpack_s12p_to_f32(data) ) show() </code></pre> <p>producing the following output</p> <p><a href="https://i.sstatic.net/vT53hvWo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vT53hvWo.png" alt="200 samples from an example 12 bit/sample dataset, unpacked with the shown function and plotted using matplotlib" /></a></p>
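<p>Since the stride tricks are the part with no direct Matlab analogue, here is a stride-free NumPy reformulation of the same unpacking — only <code>reshape</code> plus integer arithmetic, so each line maps 1:1 onto Matlab array operations (a sketch; verify against real captures before trusting it):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

def unpack_s12p_to_f32_portable(buf):
    b = np.frombuffer(buf, dtype=np.uint8).reshape(-1, 3).astype(np.int32)
    s0 = b[:, 0] | ((b[:, 1] &amp; 0xF) &lt;&lt; 8)   # byte0 + low nibble of byte1
    s1 = (b[:, 1] &gt;&gt; 4) | (b[:, 2] &lt;&lt; 4)    # high nibble of byte1 + byte2
    s = np.stack([s0, s1], axis=1).reshape(-1)
    s = np.where(s &gt;= 2048, s - 4096, s)    # sign-extend 12-bit two's complement
    # same scaling as (s12 &lt;&lt; 20) * 2**-31, i.e. normalize by 2**11
    return (s / 2048.0).astype(np.float32)
</code></pre>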
<python><numpy><matlab><bit-manipulation><data-conversion>
2025-08-19 16:22:11
4
163,045
datenwolf
79,740,068
1,549,950
release-please not updating uv.lock when creating a new release
<p>We are using <code>uv</code> to manage our Python packages and <code>release-please</code> to create our releases. The workflow that creates the new releases in GitHub currently does not update the version of our package in <code>uv.lock</code>.</p> <p><code>release-please</code> currently updates the following files:</p> <ul> <li><code>.release-please-manifest.json</code></li> <li><code>CHANGELOG.md</code></li> <li><code>pyproject.toml</code></li> </ul> <p>It should additionally change the version number in:</p> <ul> <li><code>uv.lock</code></li> </ul> <p>Here's the <code>release-please-config.yaml</code> we are using:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;packages&quot;: { &quot;.&quot;: { &quot;package-name&quot;: &quot;&lt;OUR PACKAGE NAME&gt;&quot;, &quot;path&quot;: &quot;src&quot;, &quot;release-type&quot;: &quot;python&quot;, &quot;bump-minor-pre-major&quot;: true, &quot;changelog-path&quot;: &quot;CHANGELOG.md&quot;, &quot;include-component-in-tag&quot;: false } }, &quot;$schema&quot;: &quot;https://raw.githubusercontent.com/googleapis/release-please/main/schemas/config.json&quot; } </code></pre> <p>Here is how we integrate it in our GitHub workflow:</p> <pre class="lang-yaml prettyprint-override"><code>jobs: release-please: runs-on: [ubuntu-latest] outputs: releases_created: ${{ steps.release.outputs.releases_created }} release_created: ${{ steps.release.outputs.release_created }} release_id: ${{ steps.release.outputs.id }} release_name: ${{ steps.release.outputs.name }} release_tag_name: ${{ steps.release.outputs.tag_name }} release_sha: ${{ steps.release.outputs.sha }} release_body: ${{ steps.release.outputs.body }} release_html_url: ${{ steps.release.outputs.html_url }} release_draft: ${{ steps.release.outputs.draft }} release_upload_url: ${{ steps.release.outputs.upload_url }} release_path: ${{ steps.release.outputs.path }} release_version: ${{ steps.release.outputs.version }} release_major: ${{ steps.release.outputs.major }} release_minor: ${{ steps.release.outputs.minor }} release_patch: ${{ steps.release.outputs.patch }} release_pr_number: ${{ steps.release.outputs.prNumber }} paths_released: ${{ steps.release.outputs.paths_released }} prs_created: ${{ steps.release.outputs.prs_created }} steps: - name: release please uses: googleapis/release-please-action@a02a34c4d625f9be7cb89156071d8567266a2445 # ratchet:googleapis/release-please-action@v4 id: release with: token: ${{ secrets.GITHUB_TOKEN }} config-file: release-please-config.json manifest-file: .release-please-manifest.json </code></pre> <p>We could now introduce another step that updates <code>uv.lock</code> and creates another commit - but ideally, changes needed for a release to all necessary files should be kept together in one commit.</p> <p>What we tried already (according to the documentation) is adding the <code>uv.lock</code> to the <code>extra-files</code> in the <code>release-please-config.json</code> - but that also didn't help.</p>
<python><github-actions><uv><release-please>
2025-08-19 14:36:00
1
8,360
Michael Lihs
79,740,047
8,595,891
uv dependency resolution error: "conflicting URLs for package" with pyproject.toml workspace and optional dependencies
<p>I am using <code>uv</code> to manage my dependencies. My <code>pyproject.toml</code> looks like</p> <pre class="lang-toml prettyprint-override"><code>[project]
name = &quot;my-project&quot;
version = &quot;2.3.0&quot;
description = &quot;My Project description.&quot;
requires-python = &quot;&gt;=3.12&quot;
dependencies = [
    &quot;bleach==6.2.0&quot;,
    &quot;dataclass-wizard==0.27.0&quot;,
    &quot;pymilvus==2.5.8&quot;,
    &quot;pymilvus-model==0.3.2&quot;,
]

[project.optional-dependencies]
partb = [
    &quot;langchain-openai==0.2.8&quot;,
    &quot;opentelemetry-exporter-otlp==1.29.0&quot;,
]
parta = [
    &quot;overrides==7.7.0&quot;,
    &quot;opentelemetry-api==1.29.0&quot;,
]
all = [
    &quot;my-project[parta,partb]&quot;,
]

[tool.uv.sources]
my-project = { workspace = true }

[tool.setuptools]
package-dir = {&quot;&quot; = &quot;src&quot;}
include-package-data = true

[tool.setuptools.packages.find]
where = [&quot;src&quot;]
</code></pre> <p>This was working well with an older version of uv, i.e.</p> <pre class="lang-bash prettyprint-override"><code>$ uv --version
uv 0.7.4
</code></pre> <p>But the latest version changed this behaviour, and now I am getting the error</p> <pre class="lang-bash prettyprint-override"><code>uv pip install -e .[all]
  × Failed to resolve dependencies for `my-project` (v2.3.0)
  ╰─▶ Requirements contain conflicting URLs for package `my-project`:
      - file:///home/User/Documents/my-project
</code></pre> <p>The suggestions I am getting are to have users install <code>my-project[parta,partb]</code> directly and to drop the <code>my-project[all]</code> extra. But since I already have a good user base familiar with this flow, I don't want to break it.</p> <p>Another suggestion is to maintain a separate <code>all</code> list that duplicates both <code>parta</code> and <code>partb</code>, which is not easy to maintain.</p> <p>How can I make this work?</p>
<python><uv>
2025-08-19 14:24:00
0
1,362
Pranjal Doshi
79,739,849
4,423,458
Get the first non-null value for a key in multiple mappings/dictionaries
<p>I have an object which stores its internal attributes in a <code>TypedDict</code>. I am implementing a flyweight pattern by allowing subclasses to freeze values and/or define defaults at the class-level to save memory.</p> <pre class="lang-py prettyprint-override"><code>from typing import ClassVar, TypedDict class SomeBaseClass: &quot;&quot;&quot;Some class that can define some behaviour.&quot;&quot;&quot; class State(TypedDict, total=False): &quot;&quot;&quot;Defines the fields that the base class can have.&quot;&quot;&quot; a: int b: str state: State &quot;&quot;&quot;Instance-level field values.&quot;&quot;&quot; frozen: ClassVar[State] = State() &quot;&quot;&quot;Class-level frozen field values.&quot;&quot;&quot; defaults: ClassVar[State] = State() &quot;&quot;&quot;Class-level default field values.&quot;&quot;&quot; def __init__(self, state: State) -&gt; None: &quot;&quot;&quot;Initialize the object with a state.&quot;&quot;&quot; self.state = state if set(self.state).intersection(self.frozen): msg = &quot;Attempted to override frozen values.&quot; raise ValueError(msg) </code></pre> <p>To access an instance's attribute value, I need to return the first non-null value found in <code>state</code>, <code>frozen</code>, or <code>default</code>, and raise an error if no such value exists.</p> <p>I have found two implementations that work . But I am wondering if there are cleaner ways to retrieve the first non-null value for a key in an iterable of mappings/dictionaries than what I am currently doing.</p> <ul> <li><p>Option a: use <code>.get()</code></p> <pre class="lang-py prettyprint-override"><code> @property def a(self) -&gt; int: &quot;&quot;&quot;Return the first non-null value using `.get()`.&quot;&quot;&quot; if ( output := self.state.get(&quot;a&quot;, self.frozen.get(&quot;a&quot;, self.defaults.get(&quot;a&quot;))) ) is None: msg = &quot;`a` not found in state, frozen or defaults&quot; raise ValueError(msg) return output </code></pre> <ul> <li>Pros: retrieving the value (or lack thereof) can be achieved as a one-liner.</li> <li>Cons: the nested structure feels clunky and not incredibly readable</li> </ul> </li> <li><p>Option b: use <code>next()</code></p> <pre class="lang-py prettyprint-override"><code> @property def b(self) -&gt; str: &quot;&quot;&quot;Return the first non-null value of `b` using `next`.&quot;&quot;&quot; if ( output := next( ( value for mapping in (self.state, self.frozen, self.defaults) if (value := mapping.get(&quot;b&quot;)) is not None ), None, ) ) is None: msg = &quot;`b` not found in state, frozen or defaults&quot; raise ValueError(msg) return output </code></pre> <ul> <li>Pros: Extendable to more mappings (though not needed in my use case)</li> <li>Cons: takes more lines of code</li> </ul> </li> </ul> <p>I also thought of using the <code>or</code> syntax:</p> <pre class="lang-py prettyprint-override"><code>return self.state.get(&quot;a&quot;) or self.frozen.get(&quot;a&quot;) or self.defaults.get(&quot;a&quot;) </code></pre> <p>This won't work as my values are allowed to be falsy (i.e. <code>0</code> for an <code>int</code> variable, <code>&quot;&quot;</code> for a <code>str</code> variable, an empty collection etc.)</p>
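<p>A third option worth sketching is <code>collections.ChainMap</code>, which implements exactly this first-hit lookup. The caveat: it keys on <em>presence</em>, so it matches this design only because a <code>total=False</code> <code>TypedDict</code> leaves unset fields absent rather than set to <code>None</code> (and a type checker may want a <code>cast</code>, since TypedDicts don't unify cleanly with <code>MutableMapping</code>):</p> <pre class="lang-py prettyprint-override"><code>from collections import ChainMap

@property
def a(self) -&gt; int:  # as it would appear inside SomeBaseClass
    try:
        # first mapping that contains the key wins: state, then frozen, then defaults
        return ChainMap(self.state, self.frozen, self.defaults)['a']
    except KeyError:
        msg = '`a` not found in state, frozen or defaults'
        raise ValueError(msg) from None
</code></pre>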
<python>
2025-08-19 11:39:34
1
642
Valentin Calomme
79,739,748
1,581,090
How to fix DpiAwarenessContext Qt error in the context of pytest on Windows 11?
<p>On Windows 11 I am trying to run pytest using Poetry and a very complex test setup, which uses Qt for some things. However, when running this complex test (which worked before, and seems to work for everyone else), I get an error:</p> <pre class="lang-none prettyprint-override"><code>qt.qpa.window: SetProcessDpiAwarenessContext() failed: Access is denied.
Qt's default DPI awareness context is DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2. If you know what you are doing, you can overwrite this default using qt.conf (https://doc.qt.io/qt-6/highdpi.html#configuring-windows).
</code></pre> <p>As <a href="https://doc.qt.io/qt-6/highdpi.html#configuring-windows" rel="nofollow noreferrer">the web page</a> suggests, I added the text</p> <pre class="lang-toml prettyprint-override"><code>[Platforms]
WindowsArguments = dpiawareness=0,1,2
</code></pre> <p>to a newly created <code>qt.conf</code> file in the folder from which I start the test:</p> <pre class="lang-bash prettyprint-override"><code>poetry run pytest .\mytests\test1.py::test_performance
</code></pre> <p>However, that does not solve the issue. The error is still there and the test does not run. Is there a way to solve this problem?</p> <p>Setting the environment variable <code>QT_QPA_PLATFORM=&quot;windows:dpiawareness=2&quot;</code> does not solve the problem either. Running this as admin might work, but I would prefer to run the test as a regular user. Or do I always have to run Qt applications as admin on Windows?</p>
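<p>In case it matters, this is how one could set the variable from <code>conftest.py</code> (a sketch; my assumption is that it must be set before any Qt module is imported):</p> <pre class="lang-py prettyprint-override"><code># conftest.py -- sketch: export the DPI awareness override before any
# test module gets a chance to import Qt
import os

os.environ.setdefault(&quot;QT_QPA_PLATFORM&quot;, &quot;windows:dpiawareness=0&quot;)
</code></pre>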
<python><windows><qt><pytest><python-poetry>
2025-08-19 09:58:00
1
45,023
Alex
79,739,660
2,836,175
How to correctly type hint a mapping (dict) with Literal keys?
<p>I want to type hint a dictionary (or equivalent object) that maps from a set of <code>Literal</code> choices to some outputs. I'm using <a href="https://google.github.io/pytype/" rel="nofollow noreferrer">pytype</a> as my type checker.</p> <p>The ideal behaviour would be something like:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal

SupportedKey = Literal[&quot;one&quot;, &quot;two&quot;]

d: dict[SupportedKey, int] = {&quot;one&quot;: 1, &quot;two&quot;: 2}
</code></pre> <p>However, pytype doesn't recognise that <code>&quot;one&quot;</code> and <code>&quot;two&quot;</code> are in fact <code>SupportedKey</code>s, and so fails with</p> <pre><code>Type annotation for d does not match type of assignment [annotation-type-mismatch]
  Annotation: dict[Literal['one', 'two'], int]
  Assignment: dict[str, Literal[1, 2]]
</code></pre> <p>Note that the mismatch in type assignment only matters in the keys, as:</p> <ul> <li><code>str</code> <strong>is not</strong> assignable to <code>Literal['one', 'two']</code></li> <li><code>Literal[1, 2]</code> <strong>is</strong> assignable to <code>int</code></li> </ul> <p>I tried a more explicit method, where the keys are constructed with appropriate typing:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, Final

SupportedKey = Literal[&quot;one&quot;, &quot;two&quot;]

KEY_ONE: Final[SupportedKey] = &quot;one&quot;
KEY_TWO: Final[SupportedKey] = &quot;two&quot;

d: dict[SupportedKey, int] = {KEY_ONE: 1, KEY_TWO: 2}
</code></pre> <p>However, this fails in a similar way - the keys are still inferred as the more general <code>str</code> rather than the <code>Literal</code>:</p> <pre><code>Type annotation for d does not match type of assignment [annotation-type-mismatch]
  Annotation: dict[Literal['one', 'two'], int]
  Assignment: dict[str, int]
</code></pre> <p>What's the best way of achieving a correctly typed mapping with <code>Literal</code>s as the keys?</p>
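<p>For reference, the only workaround I have found so far is to <code>cast</code> each key explicitly, which works but feels heavy-handed (a sketch; I have not verified that pytype accepts it):</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, cast

SupportedKey = Literal[&quot;one&quot;, &quot;two&quot;]

# hypothetical workaround: force each key to the Literal type explicitly
d: dict[SupportedKey, int] = {
    cast(SupportedKey, &quot;one&quot;): 1,
    cast(SupportedKey, &quot;two&quot;): 2,
}
</code></pre>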
<python><python-typing><pytype>
2025-08-19 08:47:15
1
939
theo-brown
79,739,357
759,880
mutex.cc : 452 RAW: Lock blocking in HuggingFace/sentence-transformers
<p>I'm on Python 3.11.13 with these package versions:</p> <pre><code>huggingface-hub        0.31.4
transformers           4.52.4
sentence-transformers  5.1.0
</code></pre> <p>And this OS (Mac):</p> <pre><code>Darwin G9XFDK7K6J 24.5.0 Darwin Kernel Version 24.5.0: Tue Apr 22 19:53:27 PDT 2025; root:xnu-11417.121.6~2/RELEASE_ARM64_T6041 arm64
</code></pre> <p>When I run the following in a Python shell:</p> <pre><code>from transformers import AutoModel

model = AutoModel.from_pretrained(&quot;sentence-transformers/all-MiniLM-L6-v2&quot;)
</code></pre> <p>I get, immediately after the <code>from transformers import AutoModel</code> line:</p> <pre><code>[mutex.cc : 452] RAW: Lock blocking 0x600003435278   @
</code></pre> <p>The hex address changes depending on which versions of HF/transformers/sentence-transformers I've tried... and I've tried a few. I've tried various versions, removing the huggingface-hub cache, changing the Python version, reinstalling Python, and reinstalling site-packages, and I still get this error. I have also restarted my Mac a few times.</p> <p>Any idea how I could overcome this?</p>
<python><python-3.x><huggingface-transformers><sentence-transformers>
2025-08-19 00:38:39
1
4,483
ToBeOrNotToBe
79,739,172
7,295,599
How to rotate a page by arbitrary angle in pymupdf?
<p>I couldn't find how to rotate a text page of a PDF by an arbitrary angle in the <a href="https://pymupdf.readthedocs.io/en/latest/" rel="nofollow noreferrer">pymupdf documentation</a>.</p> <p>There is <code>page.set_rotation(angle)</code>, but it only allows <code>0, 90, 180, 270</code> degrees, and for <code>90</code> and <code>270</code> degrees it also changes the page orientation.</p> <p>So, I was hoping for something like:</p> <pre><code>import pymupdf

doc = pymupdf.open(&quot;Rest.pdf&quot;)            # open document
page = doc[0]                             # get the 1st page of the document
page.set_rotation(0)                      # rotate the page
matrix = pymupdf.Matrix(1, 0, 0, 1, 0, 0)
matrix.prerotate(2.5)                     # 2.5 degrees
page.add_transformation(matrix)           # BUT THIS DOES NOT EXIST!
doc.save(&quot;Test_rotated.pdf&quot;)
</code></pre> <p>Can this be done at all? Or do I maybe have to convert the text to an image and then rotate that image? Thank you for any hints.</p>
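<p>One direction I am experimenting with: copying the page into a fresh document via <code>show_pdf_page</code>, whose <code>rotate</code> parameter seems to accept arbitrary float angles (a sketch; I am not sure this is the intended approach, and content near the page edges may get clipped):</p> <pre><code>import pymupdf

src = pymupdf.open(&quot;Rest.pdf&quot;)
dst = pymupdf.open()  # new, empty output document

page = src[0]
# create a target page with the same dimensions as the source page
new_page = dst.new_page(width=page.rect.width, height=page.rect.height)
# render page 0 of the source into the target, rotated by 2.5 degrees
new_page.show_pdf_page(new_page.rect, src, 0, rotate=2.5)

dst.save(&quot;Test_rotated.pdf&quot;)
</code></pre>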
<python><pdf><rotation><pymupdf>
2025-08-18 19:37:09
3
27,030
theozh
79,739,170
1,779,973
PyCharm Debugger Stuck on "Collecting data..." When Debugging Pytest Unit Test
<p>I'm encountering a persistent issue with PyCharm when trying to debug a <code>pytest</code> unit test.</p> <p>Running the project normally and attaching the debugger works fine. Debugging regular scripts behaves as expected. But when I debug a <code>pytest</code> unit test:</p> <ul> <li>The debugger halts at the first breakpoint.</li> <li>It shows &quot;Collecting data...&quot; instead of displaying variable values.</li> <li>Clicking &quot;Step Over&quot; or &quot;Continue&quot; does nothing — the debugger appears frozen.</li> </ul> <p>Environment:</p> <ul> <li>PyCharm version: 2025.2.0.1</li> <li>Python version: 3.12.11</li> <li>OS: macOS</li> <li>Venv Manager: uv</li> <li>pytest version: 8.4.0</li> </ul> <p>I've tried:</p> <ul> <li>Restarting PyCharm and invalidating caches.</li> <li>Simplifying <code>__str__</code> and <code>__repr__</code> methods in my code.</li> <li>Enabling <strong>Gevent compatible mode</strong> in the debugger settings — no improvement.</li> <li>Verified that <code>--cov</code> is <strong>not</strong> present in my <code>pytest.ini</code>.</li> <li>Tried adding <code>--no-cov</code> to the <strong>Additional Arguments</strong> in the Run/Debug Configuration, but this caused an error: <pre><code>unrecognized arguments: --no-cov </code></pre> </li> </ul> <p>Has anyone found a reliable fix for this issue? Is there a workaround to get PyCharm's debugger working properly with <code>pytest</code>?</p>
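<p>For reference, this is the shape of test I have been debugging (a hypothetical minimal sketch; I have not yet confirmed whether the hang reproduces outside the real project):</p> <pre><code># test_minimal.py -- reconstructed sketch, not the actual project code
def test_addition():
    left = 1
    right = 2
    total = left + right  # breakpoint here: debugger shows &quot;Collecting data...&quot;
    assert total == 3
</code></pre>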
<python><unit-testing><debugging><pycharm><pytest>
2025-08-18 19:36:14
0
536
Ido
79,739,054
11,222,417
how to browse lines in vscode debug console?
<p>Suppose I execute a few lines in the VS Code Python debug console:</p> <pre><code>a = 1
b = 1
print(a + b)
</code></pre> <p>Then I want to re-run this snippet, but with the second line changed to <code>b = 99</code>. To do this I press <kbd>up arrow</kbd> to bring back the previous snippet, and the cursor lands on the snippet's first line. To move to the second line I press <kbd>down arrow</kbd>, but the debugger interprets this as browsing between different code snippets, so it displays the &quot;next&quot; snippet instead of moving the cursor to the next line.</p> <p>How can I move between lines within a snippet in the debug console?</p>
<python><visual-studio-code><debug-console>
2025-08-18 17:07:07
0
305
J. Doe
79,738,840
10,911,376
Authenticating a user with NextCloud using oauth2 with authlib in flask fails at getting access token
<p>I am writing a Flask app that authenticates users via OAuth2 with a NextCloud instance (and will later use file synchronisation). From what I read, this should be fairly straightforward. For example, authlib describes how to create an OAuth2 client with Flask: <a href="https://docs.authlib.org/en/latest/client/index.html" rel="nofollow noreferrer">https://docs.authlib.org/en/latest/client/index.html</a></p> <p>Here's the current implementation:</p> <pre class="lang-py prettyprint-override"><code>import requests
from flask import Flask, render_template, jsonify, request, session, url_for, redirect
from flask_session import Session
from authlib.integrations.flask_client import OAuth

app = Flask(&quot;webapp&quot;)

# app.config is set here, specifically settings:
# NEXTCLOUD_CLIENT_ID
# NEXTCLOUD_SECRET
# NEXTCLOUD_API_BASE_URL
# NEXTCLOUD_AUTHORIZE_URL
# NEXTCLOUD_ACCESS_TOKEN_URL

# set session to be managed server-side
Session(app)

# register NextCloud oauth
oauth = OAuth(app)
nextcloud = oauth.register('nextcloud')

@app.route(&quot;/&quot;, methods=[&quot;GET&quot;])
def index():
    return render_template(&quot;index.html&quot;), 200

@app.route(&quot;/nextcloud_login&quot;, methods=[&quot;GET&quot;])
def nextcloud_login():
    redirect_uri = url_for(&quot;callback_nextcloud&quot;, _external=True)
    return nextcloud.authorize_redirect(redirect_uri)

@app.route('/callback/nextcloud', methods=[&quot;GET&quot;])
def callback_nextcloud():
    token = nextcloud.authorize_access_token()
    session[&quot;nextcloud_token&quot;] = token
    return redirect(url_for(&quot;index&quot;))
</code></pre> <p>The <code>index</code> route shows a page with a link to the login route. When clicked, the user is sent to the NextCloud instance using the <code>NEXTCLOUD_AUTHORIZE_URL</code> and, after approving the authorization, is sent back to the route <code>/callback/nextcloud</code> (registered that way in NextCloud). I can confirm it works up to this point, and the request to the callback route looks like <code>GET /callback/nextcloud?state=some-token&amp;code=even-longer-token HTTP/1.1</code>. However, when the line <code>token = nextcloud.authorize_access_token()</code> is executed, authlib tries to fetch an access token in the background, and this fails with a 500 response and no error message. It took some digging to find an error message on the NextCloud server (see below).</p> <p>It is entirely possible that the issue is with NextCloud and has nothing to do with my code, but I find that unlikely: I spent hours searching the web without finding a shred of useful information, and it's unlikely I'm the first to notice such an issue. At the moment I'm assuming my implementation is at fault and I'm hopefully just overlooking something very simple.</p> <p>NextCloud error message in the server log: <code>OC\\Security\\Crypto::calculateHMAC(): Argument #1 ($message) must be of type string, null given, called in /var/www/nextcloud/apps/oauth2/lib/Controller/OauthApiController.php on line 142 in file '/var/www/nextcloud/lib/private/Security/Crypto.php' line 42</code></p>
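<p>In case the shorthand registration is relevant, this is the explicit registration form as I understand it from the authlib docs (a sketch; the parameter names are my reading of the docs, and the config values are the same ones listed above):</p> <pre class="lang-py prettyprint-override"><code># sketch: register the client with all OAuth2 endpoints spelled out
nextcloud = oauth.register(
    'nextcloud',
    client_id=app.config[&quot;NEXTCLOUD_CLIENT_ID&quot;],
    client_secret=app.config[&quot;NEXTCLOUD_SECRET&quot;],
    api_base_url=app.config[&quot;NEXTCLOUD_API_BASE_URL&quot;],
    authorize_url=app.config[&quot;NEXTCLOUD_AUTHORIZE_URL&quot;],
    access_token_url=app.config[&quot;NEXTCLOUD_ACCESS_TOKEN_URL&quot;],
)
</code></pre>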
<python><flask><oauth-2.0><nextcloud><authlib>
2025-08-18 14:04:44
1
696
Etienne Ott
79,738,668
3,732,793
Python package upgrade with uv behaves as in old version
<p>For an old project I upgraded jsonschema. For a prototype project I added jsonschema to check the new functionality of jsonschema's <code>Draft7Validator</code>.</p> <pre><code>uv pip show jsonschema
</code></pre> <p>shows the same version in both cases.</p> <p>Also,</p> <pre><code>uv tree
</code></pre> <p>shows the same dependency versions.</p> <p>But in both VSCode and PyCharm, the new parameters for <code>Draft7Validator</code> are not resolved and are marked as unexpected in the old project, while they work fine in the prototype.</p> <p>How do I correctly get the newest jsonschema version with uv for the old project?</p>
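<p>For context, this is the check I would run with each project's interpreter to see which jsonschema is actually imported (a sketch):</p> <pre><code># quick sanity check: which interpreter and which jsonschema is in use?
import sys
from importlib.metadata import version

import jsonschema

print(sys.executable)         # which interpreter the IDE/uv actually runs
print(version(&quot;jsonschema&quot;))  # installed distribution version
print(jsonschema.__file__)    # where the package is imported from
</code></pre>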
<python><jsonschema><uv>
2025-08-18 11:53:56
1
1,990
user3732793