
LunarArtist_07_14 error

By Guest
Created: 2023-11-09 10:12:27
Expiry: Never
sfw

Git LFS initialized.
Cloning into '/content/fix'...
remote: Enumerating objects: 74, done.
remote: Total 74 (delta 0), reused 0 (delta 0), pack-reused 74
Unpacking objects: 100% (74/74), 21.93 KiB | 1.15 MiB/s, done.
author -- TheBloke
repo -- MythoMax-L2-13B-GGUF
branch -- main
filename -- mythomax-l2-13b.Q6_K.gguf
format -- gguf
backend -- koboldcpp
mode -- file
beaks -- 13
quantz -- q6_k
quantz_num -- 6
bits -- unknown
pointer -- mythomax-l2-13b.Q6_K.gguf
path_pointer -- /content/colabTemp/mythomax-l2-13b.Q6_K.gguf

::: NOTIFICATIONS :::
model has 6 quantz and 13b - Colab will work alright, but higher context might not be available

::: Colab is magic :::

...[context] user didn't set context, will calculate max context automatically
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
::: notebook automatically calculated the about-right* CONTEXT for this model:
::: 4096
::: if the model does not work properly with that context (CUDA error), try lowering it by 512
::: *it is only an approximate value - test and report back, especially with non-13b models
::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
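The log does not show how the notebook arrives at 4096; as a rough sketch (an assumption, not the notebook's actual formula), the dominant per-context cost it has to budget for is the fp16 KV cache, which can be estimated from the model dimensions the loader prints later in the log (n_layer = 40, n_embd = 5120, n_head_kv = n_head so no GQA shrinkage):

```python
def kv_cache_mb(n_layer: int, n_ctx: int, n_embd: int, bytes_per_elem: int = 2) -> float:
    """Approximate size of the fp16 K and V caches for a llama-style model, in MB."""
    # one K tensor and one V tensor, each of n_layer * n_ctx * n_embd elements
    return 2 * n_layer * n_ctx * n_embd * bytes_per_elem / (1024 * 1024)

# For this 13B model at the chosen 4096 context:
print(kv_cache_mb(40, 4096, 5120))  # 3200.0 - matches the "kv self size" the loader reports
```

Each 512 of context removed frees 400 MB of KV cache here, which is why the notebook suggests stepping down in 512 increments after a CUDA out-of-memory error.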

::: DOWNLOAD BACKEND, MODEL, LORA :::
...[backend] processing the backend: kobold.cpp
...[cloudflare] nohup.out from the previous instance was detected, deleting
...[backend] kobold.cpp is already installed
...[cloudflare] launching Cloudflare and waiting for an answer...
...[model] model is already downloaded
...[LoRA] processing LoRA

...[LoRA] LoRAs are specified, will process them now
...[LoRA] LoRA llama-2-13b-pny-3e is already downloaded

::: LAUNCHING BACKEND :::
nohup: appending output to 'nohup.out'
Cloudflare tunnel was created, here is the link:
2023-11-09T10:05:27Z INF Thank you for trying Cloudflare Tunnel. Doing so, without a Cloudflare account, is a quick way to experiment and try it out. However, be aware that these account-less Tunnels have no uptime guarantee. If you intend to use Tunnels in production you should use a pre-created named tunnel by following: https://developers.cloudflare.com/cloudflare-one/connections/connect-apps
2023-11-09T10:05:27Z INF Requesting new quick Tunnel on trycloudflare.com...
2023-11-09T10:05:28Z INF +--------------------------------------------------------------------------------------------+
2023-11-09T10:05:28Z INF | Your quick Tunnel has been created! Visit it at (it may take some time to be reachable): |
2023-11-09T10:05:28Z INF +--------------------------------------------------------------------------------------------+
2023-11-09T10:05:28Z INF Cannot determine default configuration path. No file [config.yml config.yaml] in [~/.cloudflared ~/.cloudflare-warp ~/cloudflare-warp /etc/cloudflared /usr/local/etc/cloudflared]
2023-11-09T10:05:28Z INF Version 2023.10.0
2023-11-09T10:05:28Z INF GOOS: linux, GOVersion: go1.20.6, GoArch: amd64
2023-11-09T10:05:28Z INF Settings: map[ha-connections:1 protocol:quic url:http://localhost:5001]
2023-11-09T10:05:28Z INF Generated Connector ID: 039a7411-8213-4b9c-8190-81ebf24e5e6f
2023-11-09T10:05:28Z INF Autoupdate frequency is set autoupdateFreq=86400000
2023-11-09T10:05:28Z INF Initial protocol quic
2023-11-09T10:05:28Z INF ICMP proxy will use 172.28.0.12 as source for IPv4
2023-11-09T10:05:28Z INF ICMP proxy will use :: as source for IPv6
2023-11-09T10:05:28Z INF Starting metrics server on 127.0.0.1:33803/metrics
2023/11/09 10:05:28 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2023-11-09T10:05:28Z INF Registered tunnel connection connIndex=0 connection=718e7e64-d24e-455d-97e6-2e427e21ec1a event=0 ip=198.41.200.113 location=ord06 protocol=quic

backend is launched with the following flags:
python /content/colabMain/koboldcpp/koboldcpp.py --highpriority --threads 2 --usecublas normal 0 mmq --gpulayers 43 --hordeconfig MythoMax-L2-13B-GGUF --lora /content/colabTemp/llama-2-13b-pny-3e/adapter_model.bin --model /content/colabTemp/mythomax-l2-13b.Q6_K.gguf --context 4096
***
Welcome to KoboldCpp - Version 1.46.1
Setting process to Higher Priority - Use Caution
High Priority for Linux Set: 0 to 1
Attempting to use CuBLAS library for faster prompt ingestion. A compatible CuBLAS will be required.
Initializing dynamic library: koboldcpp_cublas.so
==========
Namespace(model='/content/colabTemp/mythomax-l2-13b.Q6_K.gguf', model_param='/content/colabTemp/mythomax-l2-13b.Q6_K.gguf', port=5001, port_param=5001, host='', launch=False, lora=['/content/colabTemp/llama-2-13b-pny-3e/adapter_model.bin'], config=None, threads=2, blasthreads=2, highpriority=True, contextsize=4096, blasbatchsize=512, ropeconfig=[0.0, 10000.0], smartcontext=False, bantokens=None, forceversion=0, nommap=False, usemlock=False, noavx2=False, debugmode=-1, skiplauncher=False, hordeconfig=['MythoMax-L2-13B-GGUF'], noblas=False, useclblast=None, usecublas=['normal', '0', 'mmq'], gpulayers=43, tensor_split=None, onready='', multiuser=False, foreground=False)
==========
Loading model: /content/colabTemp/mythomax-l2-13b.Q6_K.gguf
[Threads: 2, BlasThreads: 2, SmartContext: False]

---
Identified as LLAMA model: (ver 6)
Attempting to Load...
---
Using automatic RoPE scaling (scale:1.000, base:10000.0)
System Info: AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 |
ggml_init_cublas: found 1 CUDA devices:
  Device 0: Tesla T4, compute capability 7.5
llama_model_loader: loaded meta data with 19 key-value pairs and 363 tensors from /content/colabTemp/mythomax-l2-13b.Q6_K.gguf (version GGUF V2 (latest))
llm_load_print_meta: format = GGUF V2 (latest)
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32000
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 4096
llm_load_print_meta: n_embd = 5120
llm_load_print_meta: n_head = 40
llm_load_print_meta: n_head_kv = 40
llm_load_print_meta: n_layer = 40
llm_load_print_meta: n_rot = 128
llm_load_print_meta: n_gqa = 1
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: n_ff = 13824
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: model type = 13B
llm_load_print_meta: model ftype = unknown, may not work (guessed)
llm_load_print_meta: model params = 13.02 B
llm_load_print_meta: model size = 9.95 GiB (6.56 BPW)
llm_load_print_meta: general.name = LLaMA v2
llm_load_print_meta: BOS token = 1 '<s>'
llm_load_print_meta: EOS token = 2 '</s>'
llm_load_print_meta: UNK token = 0 '<unk>'
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 10183.83 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 128.29 MB
llm_load_tensors: offloading 40 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 43/43 layers to GPU
llm_load_tensors: VRAM used: 10055.54 MB
...................................................................................................
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
WARNING: failed to allocate 3202.00 MB of pinned memory: out of memory
llama_kv_cache_init: offloading v cache to GPU
llama_kv_cache_init: offloading k cache to GPU
llama_kv_cache_init: VRAM kv self = 3200.00 MB
llama_new_context_with_model: kv self size = 3200.00 MB
llama_new_context_with_model: compute buffer total size = 363.88 MB
llama_new_context_with_model: VRAM scratch buffer: 358.00 MB
llama_new_context_with_model: total VRAM used: 13613.54 MB (model: 10055.54 MB, context: 3558.00 MB)

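The loader's VRAM accounting is internally consistent and worth cross-checking when tuning --gpulayers or context, since the Tesla T4 only has about 15 GB of VRAM:

```python
# Figures taken directly from the loader output in this log.
model_mb = 10055.54    # llm_load_tensors: VRAM used
kv_mb = 3200.00        # llama_kv_cache_init: VRAM kv self
scratch_mb = 358.00    # llama_new_context_with_model: VRAM scratch buffer

context_mb = kv_mb + scratch_mb    # the "context" figure in the summary line
total_mb = model_mb + context_mb   # the "total VRAM used" figure

print(context_mb, round(total_mb, 2))  # 3558.0 13613.54
```

Note the separate WARNING about 3202 MB of pinned memory is a host-RAM allocation failing, not a VRAM failure, and the load continues past it.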
Attempting to apply LORA adapter: /content/colabTemp/llama-2-13b-pny-3e/adapter_model.bin
llama_apply_lora_from_file_internal: applying lora adapter from '/content/colabTemp/llama-2-13b-pny-3e/adapter_model.bin' - please wait ...
llama_apply_lora_from_file_internal: unsupported file version
gpttype_load_model: error: failed to apply lora adapter
Load Model OK: False
Could not load model: /content/colabTemp/mythomax-l2-13b.Q6_K.gguf
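The failure here is the LoRA, not the model: llama.cpp-family backends like KoboldCpp load LoRAs in their own converted GGML format, while the adapter_model.bin that PEFT training emits is a PyTorch zip archive, which trips the "unsupported file version" check. A minimal sketch (an assumption about the file layout, not KoboldCpp's own code) for spotting an unconverted adapter before launching:

```python
def looks_like_pytorch_checkpoint(path: str) -> bool:
    """PEFT's adapter_model.bin is a PyTorch zip archive, whose first
    two bytes are the zip magic b'PK'; a converted GGML LoRA is not."""
    with open(path, "rb") as f:
        return f.read(2) == b"PK"
```

If this returns True, the adapter needs converting first (llama.cpp at the time shipped a convert-lora-to-ggml.py script for exactly this) before passing it to --lora.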
