🐛 BUG: Uncaught exception #1401
Comments
I get the identical error in my Workers project, and I don't use D1. Running on macOS 13.5 with wrangler 3.5.0. Also filed here: #1010
I tried again this morning, after rebooting overnight. The error is gone, the reboot fixed it. It wasn't the query at fault here, probably not D1 (directly anyway). There's something else going on, and somehow a reboot fixed it.
I am getting this issue when quickly closing a connection to my worker.
I've transferred this to
We'll need a simple reproduction example in order to diagnose this.
I'm not sure if this is exactly the same thing:
(Note @jasnell I can reproduce this, but sometimes it requires patience.)
Here are the associated log files from one reproduction.
I was able to reproduce this using a bare minimum worker template with fetch('https://example.com'). Strangely, it seems to work fine with fetch('http://ip-api.com/json'), so I think it's an issue with SSL. Here is the reproduction: https://github.com/cloudflare/workers-sdk/files/14214588/test-worker.zip I am using wrangler 3.28.0, macOS 13.5, and Node.js 18, but I don't think this matters because I've experienced this reliably on both my Windows and Mac computers, with different SDK versions and different Node.js versions. Last week I was not experiencing this and I have no idea why. cc @jasnell
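For reference, the bare-minimum repro described above boils down to a module worker of this shape (a sketch, not the contents of the attached zip; names are illustrative):

```javascript
// Minimal module worker: one outbound HTTPS subrequest.
// The reporter sees "Uncaught exception ... internal error" under
// `wrangler dev` with an https:// URL but not with plain http://,
// which is why SSL is suspected.
const worker = {
  async fetch(request, env, ctx) {
    const upstream = await fetch('https://example.com');
    return new Response(await upstream.text(), { status: upstream.status });
  },
};

// In a real project this object would be the module's default export:
// export default worker;
```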
Can we please rename this issue to something more useful, like "internal error" on fetch with HTTPS? cc @jasnell
I would like to confirm this issue, but it seemed to resolve with a restart of my machine. I am currently using WSL2, though; that might have an effect on it, but people above have mentioned that it happens regardless of OS.
I am new to Cloudflare Workers and am starting from the Rust example project, so I think I have the simplest reproduction:
The same error presented here, with a different line number. Maybe it isn't the same case, but I find it weird that a fast and simple REST endpoint like this could fail so easily.
I am using Playwright testing in GitHub Actions and this error is spammed. It doesn't seem to affect the behaviour of the server or the success of the tests, though, so it's not a huge deal for me. This also happens when running the tests locally, but doesn't happen when running the dev server and using the app normally.
Any updates on this? I keep getting similar errors:
I am running
I had exactly the same error with a minimal reproduction, based on this tutorial, with ngrok hosting. The WebSocket handshake succeeds, but after a few minutes the connection is lost because of the error above. Any solution?
None that I am aware of, sorry. Plus, this issue seems old; I doubt we are getting any response or solution anytime soon.
Any time you see "internal error", there should be a separate log line actually logging the real error. You need to look for that log line. @martian0x80 in your example the actual error seems to be:
This seems to be a regular DNS lookup error, probably specific to your environment. Obviously the way these errors are reported is not so great. Sorry about that. workerd's error logging is awkward, since the code was originally designed to log errors that we, the Cloudflare Workers team, needed to address, while explicitly not logging problems that the application should address (those are simply delivered to the application as thrown exceptions). We really need to go back and come up with a better story here.
@kentonv Thanks mate, so I cannot get a meaningful log for the WebSocket close :(
Thanks for the response. I just tested it on a new GitHub Codespace as well and I get the same errors, even on the Cloudflare Pages deployment, which throws a silent "error code: 1016" in the logs. This is what the wrangler logs say:
This only occurs on pages that are server-side rendered. I have no idea why 'internal_suspense_cache_hostname.local' is not resolved.
I wish I could debug this.
I am seeing similar
Is there anything new on this? I have the same error, and there is just the internal error without any other meaningful log.
After removing the .next, .vercel and .wrangler directories, I was able to stop seeing this error. I hadn't run the preview command for a while, so I'm not sure what exactly caused the issue.
I just had this issue and managed to resolve it by using 127.0.0.1 instead of localhost for wrangler.
It seems to be some problem with DNS resolution, which may explain why it worked for me on one machine and not the other.
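For anyone wanting to try this workaround, the local dev address can be pinned in wrangler's configuration rather than passed on every invocation (a sketch; the `[dev]` block is part of wrangler.toml configuration, and the port value here is only the default, not from this report):

```toml
# wrangler.toml — bind the local dev server to the IPv4 loopback
# address directly instead of relying on "localhost" name resolution.
[dev]
ip = "127.0.0.1"
port = 8787
```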
Which Cloudflare product(s) does this pertain to?
Wrangler or miniflare
What version(s) of the tool(s) are you using?
3.5.0
What version of Node are you using?
18.16.1
What operating system are you using?
Linux (Ubuntu 20.04)
Describe the Bug
In one of my handlers, after executing a select query with batch, I get the following exception:
workerd/server/server.c++:2533: error: Uncaught exception: kj/async-io-unix.c++:186: disconnected: remote.worker_do_not_log; Request failed due to internal error
It seems to happen in some kind of destructor, finalizer, or other thread/cleanup task, since execution continues after the query: I can see console.log output from subsequent code. However, the server crashes before it is able to return the response (the client doesn't get a response back). I can reproduce this every single run. Commenting out that query makes the exception go away. No other queries are executed in that request.
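For context, the failing handler has roughly this shape (a sketch only; the binding name `DB` and the table and column names are my assumptions for illustration, not from the report):

```javascript
// Sketch of the reported pattern: a D1 batch() running a SELECT,
// followed by code that demonstrably still executes, yet the response
// never reaches the client before workerd logs the uncaught exception.
const handler = {
  async fetch(request, env) {
    const [result] = await env.DB.batch([
      env.DB.prepare('SELECT id, name FROM items WHERE id = ?').bind(1),
    ]);
    // This log line is the evidence that execution continues past the query.
    console.log('rows after batch:', result.results);
    // The client reportedly never receives this response.
    return new Response(JSON.stringify(result.results), {
      headers: { 'content-type': 'application/json' },
    });
  },
};
```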
I deleted the database, deleted the .wrangler dir, and recreated the database. Same issue.
I created a new hello world project (following the getting started guide), containing only the "hello world" response, and got the same exception on the first run only. Subsequent runs were fine. That's interesting: what state could wrangler be keeping that's the same for the machine, but not in the directory/project/code, which all changed?
I switched back to the original project, reproduced the exception again. Went back to the new hello world, but it's fine now, I can't reproduce the exception in the hello world project again. Odd.
I then added the same database, same migrations, same query to the hello world project. No issues. I don't know how to make a minimal repro for it.
It's tough to debug a problem with so little to go on. Is there anything I can do to get more info? Is there a way to build a debug version of wrangler or miniflare? Enable logging?
Please provide a link to a minimal reproduction
No response
Please provide any relevant error logs
No response