
bug: Can't make 2nd message #3105

Open
zeinalfa708 opened this issue Jun 25, 2024 · 4 comments
Labels
P1: important Important feature / fix type: bug Something isn't working

Comments

@zeinalfa708

  • I have searched the existing issues

Current behavior

I can't send any single word/message to Gemma 2B Q4, as the screenshot shows.
[screenshot]

Minimum reproduction step

  1. Start a conversation.
  2. Enter anything you like and send it.
  3. After the AI finishes generating, type anything again; it then shows this error.
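For anyone reproducing this against Jan's local OpenAI-compatible server instead of the UI, the failing second-message turn (as captured in the log under Screenshots / Logs) can be sketched as a request body like this. This is only an illustration: the model id `gemma-2b-q4` and the helper name are assumptions, not Jan's actual identifiers.

```python
import json

def build_second_message_payload(model: str) -> dict:
    """Build a chat-completions request body mirroring the failing
    second-message turn from the log (model id is an assumption)."""
    messages = [
        {"role": "user", "content": "oi"},
        {"role": "assistant",
         "content": "Hello! It's nice to hear from you. How can I assist you today?"},
        {"role": "user", "content": "Long time no see"},
    ]
    return {"model": model, "messages": messages, "stream": True}

payload = build_second_message_payload("gemma-2b-q4")
print(json.dumps(payload, indent=2))
```

Sending this payload to the local server should reproduce the same "Error during inference" on the second user turn if the bug is server-side rather than UI-side.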

Expected behavior

It should continue/resume the conversation normally.

Screenshots / Logs

LOG:

20240625 11:59:12.400000 UTC 8124 INFO sent the non stream, waiting for respone - llamaCPP.cc:424
20240625 11:59:16.945000 UTC 8060 INFO Messages:[
{
"content" : "oi",
"role" : "user"
},
{
"content" : "Hello! It's nice to hear from you. How can I assist you today?",
"role" : "assistant"
},
{
"content" : "Long time no see",
"role" : "user"
}
] - llamaCPP.cc:211
20240625 11:59:16.945000 UTC 8060 INFO Stop:[] - llamaCPP.cc:212
20240625 11:59:16.945000 UTC 9608 INFO Wait for task to be released:4 - llamaCPP.cc:413
20240625 11:59:16.945000 UTC 8060 DEBUG [makeHeaderString] send stream with transfer-encoding chunked - HttpResponseImpl.cc:535
20240625 11:59:17.032000 UTC 8060 INFO Error during inference - llamaCPP.cc:386
20240625 11:59:17.050000 UTC 9608 INFO Task completed, release it - llamaCPP.cc:416
20240625 11:59:18.038000 UTC 8060 INFO Messages:[
{
"content" : "Summarize in a 10-word Title. Give the title only. "oi"",
"role" : "user"
}
] - llamaCPP.cc:211
20240625 11:59:18.038000 UTC 8060 INFO Stop:[] - llamaCPP.cc:212
20240625 11:59:18.038000 UTC 8060 INFO sent the non stream, waiting for respone - llamaCPP.cc:424
20240625 11:59:18.088000 UTC 8060 INFO Here is the result:1 - llamaCPP.cc:428
20240625 11:59:19.985000 UTC 8124 INFO Here is the result:0 - llamaCPP.cc:428
20240625 11:59:44.720000 UTC 8124 INFO Clean cache threshold reached! - llamaCPP.cc:199
20240625 11:59:44.720000 UTC 8124 INFO Cache cleaned - llamaCPP.cc:201
20240625 11:59:44.720000 UTC 8124 INFO Messages:[
{
"content" : "p",
"role" : "user"
}
] - llamaCPP.cc:211
20240625 11:59:44.720000 UTC 8124 INFO Stop:[] - llamaCPP.cc:212
20240625 11:59:44.726000 UTC 8124 DEBUG [makeHeaderString] send stream with transfer-encoding chunked - HttpResponseImpl.cc:535
20240625 11:59:48.461000 UTC 8124 INFO reached result stop - llamaCPP.cc:373
20240625 11:59:48.461000 UTC 8124 INFO End of result - llamaCPP.cc:346
20240625 11:59:48.507000 UTC 9608 INFO Task completed, release it - ll

[screenshot]

Jan version

0.5.1

In which operating systems have you tested?

  • macOS
  • Windows
  • Linux

Environment details

2024-06-25T11:52:47.474Z [SPECS]::Version: 0.5.1
2024-06-25T11:52:47.474Z [SPECS]::CPUs: [{"model":"Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz","speed":3912,"times":{"user":179625,"nice":0,"sys":114078,"idle":1733703,"irq":11968}},{"model":"Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz","speed":3912,"times":{"user":137437,"nice":0,"sys":59656,"idle":1830140,"irq":796}},{"model":"Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz","speed":3912,"times":{"user":401093,"nice":0,"sys":143578,"idle":1482578,"irq":1875}},{"model":"Intel(R) Core(TM) i3-7100 CPU @ 3.90GHz","speed":3912,"times":{"user":243031,"nice":0,"sys":89265,"idle":1694953,"irq":1250}}]
2024-06-25T11:52:47.475Z [SPECS]::Machine: x86_64
2024-06-25T11:52:47.475Z [SPECS]::Endianness: LE
2024-06-25T11:52:47.475Z [SPECS]::Parallelism: 4
2024-06-25T11:52:47.476Z [SPECS]::Free Mem: 470646784
2024-06-25T11:52:47.476Z [SPECS]::OS Version: Windows 11 Pro
2024-06-25T11:52:47.476Z [SPECS]::OS Platform: win32
2024-06-25T11:52:47.476Z [SPECS]::OS Release: 10.0.22631
2024-06-25T11:52:47.476Z [SPECS]::Total Mem: 8539951104

@zeinalfa708 zeinalfa708 added the type: bug Something isn't working label Jun 25, 2024
@Van-QA (Contributor) commented Jun 25, 2024

hi @zeinalfa708, can you try removing the extension folder and restarting Jan?

@Van-QA Van-QA added the status: needs info This doesn't seem right, more information is requested label Jun 26, 2024
@zeinalfa708 (Author) commented Jun 27, 2024

Sorry for the delay. Where is it? I can't find it anywhere.
[screenshot]

@Van-QA (Contributor) commented Jun 28, 2024

hi @zeinalfa708, you will find it here: https://jan.ai/docs/data-folder#folder-structure
[screenshot]

@Van-QA (Contributor) commented Jul 1, 2024

The issue is reproducible from our side, cc: @hahuyhoang411
Somehow the stop word is empty ❌
[screenshot]
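Since the log shows `Stop:[]` immediately before `Error during inference`, one plausible mitigation is a defensive fallback when the request carries an empty stop list. The sketch below is only an illustration of that idea, not Jan's actual code: the function name and the default stop token `</s>` are assumptions.

```python
# Assumed fallback stop token for illustration; not Jan's actual default.
DEFAULT_STOP = ["</s>"]

def normalize_stop_words(stop):
    """Return a non-empty list of stop words, falling back to a default
    when the request carries a missing or empty `stop` field."""
    if not stop:  # covers both None and []
        return list(DEFAULT_STOP)
    # Drop empty strings, which would otherwise match at every position.
    cleaned = [s for s in stop if s]
    return cleaned or list(DEFAULT_STOP)
```

With a guard like this, the second-message request from the log (`Stop:[]`) would reach the inference loop with at least one valid stop word instead of an empty list.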

@Van-QA Van-QA added P1: important Important feature / fix and removed status: needs info This doesn't seem right, more information is requested labels Jul 1, 2024
Labels
P1: important Important feature / fix type: bug Something isn't working
Projects
Status: Planned
Development

No branches or pull requests

3 participants