
For just $250, a Hugging Face tech lead walks you through fine-tuning Llama 3

May 06, 2024 03:52 PM

Table of contents
  • FSDP + Q-Lora background
  • Set up the development environment
  • Create and prepare the dataset
  • Fine-tune the LLM with PyTorch FSDP, Q-Lora and SDPA
  • Optional step: merge the LoRA adapter into the original model
  • Test the model and run inference

Open models released recently, such as Meta's Llama 3, the Mistral and Mixtral models from Mistral AI, and Jamba from AI21 Labs, have become viable alternatives to OpenAI's models.

However, in most cases a model needs to be fine-tuned on your own data before it can fully meet the needs of a specific business scenario.

While fine-tuning smaller models such as Mistral on a single GPU with Q-LoRA is no longer difficult, efficiently fine-tuning very large models such as Llama 3 70B or Mixtral remained an open challenge until now.

To address this, Philipp Schmid, a tech lead at Hugging Face, explains how to fine-tune Llama 3 with PyTorch FSDP and Q-Lora, with the help of Hugging Face libraries such as TRL, Transformers, peft and datasets. In addition to FSDP, the author also uses Flash Attention v2 through SDPA, available since the PyTorch 2.2 update.

The main steps of fine-tuning are as follows:

  • Set up the development environment
  • Create and prepare the dataset
  • Fine-tune the LLM with PyTorch FSDP, Q-Lora and SDPA
  • Test the model and run inference

Note: The content of this article was created and validated on NVIDIA H100 and NVIDIA A10G GPUs. The configuration files and code are optimized for 4x A10G GPUs, each with 24GB of memory. If you have more compute available, the configuration file (yaml file) mentioned in step 3 needs to be modified accordingly.

FSDP + Q-Lora background

In a joint project involving Answer.AI, Q-Lora creator Tim Dettmers, and Hugging Face, the authors summarized the technical support that Q-Lora and PyTorch FSDP (Fully Sharded Data Parallel) can provide.

Combining FSDP and Q-Lora now lets you fine-tune Llama 2 70B or Mixtral 8x7B on two consumer-grade GPUs (24GB each); see the article below for details. Hugging Face's PEFT library plays an essential role in this.

Article link: https://www.answer.ai/posts/2024-03-06-fsdp-qlora.html

PyTorch FSDP is a data/model parallelism technique that shards the model across GPUs, reducing memory requirements and making it possible to train larger models more efficiently. Q-LoRA is a fine-tuning method that uses quantization and low-rank adapters to sharply reduce compute requirements and memory footprint.
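To make the idea concrete, here is a minimal sketch (not the author's actual script) of how 4-bit quantization and a LoRA adapter are typically combined with the Hugging Face stack; the model id and LoRA hyperparameters below are illustrative assumptions:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit (NF4) quantization config -- the "Q" in Q-LoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,  # lets FSDP shard the quantized weights
)

# Load the frozen, quantized base model; attn_implementation="sdpa" enables SDPA attention
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-70b",          # illustrative model id
    quantization_config=bnb_config,
    attn_implementation="sdpa",
    torch_dtype=torch.bfloat16,
)

# Low-rank adapter config -- the "LoRA" part; only these small matrices are trained
peft_config = LoraConfig(
    r=16, lora_alpha=8, lora_dropout=0.05,  # illustrative hyperparameters
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()          # typically well under 1% of all parameters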

Set up the development environment

The first step is to install PyTorch and the Hugging Face libraries, including trl, transformers, and datasets. trl is a new library built on top of transformers and datasets that makes supervised fine-tuning, RLHF, and alignment of open large language models easier.

# Install Pytorch for FSDP and FA/SDPA
%pip install "torch==2.2.2" tensorboard

# Install Hugging Face libraries
%pip install --upgrade "transformers==4.40.0" "datasets==2.18.0" "accelerate==0.29.3" "evaluate==0.4.1" "bitsandbytes==0.43.1" "huggingface_hub==0.22.2" "trl==0.8.6" "peft==0.10.0"

Next, log in to Hugging Face to get access to the Llama 3 70B model.
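A minimal way to do this (assuming you have a Hugging Face access token that has been granted access to the gated Llama 3 weights) is via huggingface_hub:

from huggingface_hub import login

# Paste an access token with permission to the gated Llama 3 repository
login(token="hf_...", add_to_git_credential=True)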

Create and prepare the dataset

Once the libraries are installed, we can create and prepare the dataset. The fine-tuning dataset should contain example samples of the task you want to solve. To learn more about creating datasets, read "How to Fine-Tune LLMs in 2024 with Hugging Face".

Article link: https://www.philschmid.de/fine-tune-llms-in-2024-with-trl#3-create-and-prepare-the-dataset

The author uses the HuggingFaceH4/no_robots dataset, a high-quality, carefully human-annotated dataset of 10,000 instructions and samples. The data can be used for supervised fine-tuning (SFT) to make a language model follow human instructions better. The no_robots dataset is modeled on the human instruction dataset described in OpenAI's InstructGPT paper and consists mainly of single-turn instructions.

{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}{"messages": [{"role": "system", "content": "You are..."}, {"role": "user", "content": "..."}, {"role": "assistant", "content": "..."}]}

The 10,000 samples in no_robots are split into 9,500 training samples and 500 test samples, some of which do not contain a system message. The author loads the data with the datasets library, adds the missing system messages, and saves everything to separate JSON files. The sample code looks like this:

from datasets import load_dataset

# Convert dataset to OAI messages
system_message = """You are Llama, an AI assistant created by Philipp to be helpful and honest. Your knowledge spans a wide range of topics, allowing you to engage in substantive conversations and provide analysis on complex subjects."""

def create_conversation(sample):
    if sample["messages"][0]["role"] == "system":
        return sample
    else:
        sample["messages"] = [{"role": "system", "content": system_message}] + sample["messages"]
        return sample

# Load dataset from the hub
dataset = load_dataset("HuggingFaceH4/no_robots")

# Add system message to each conversation
columns_to_remove = list(dataset["train"].features)
columns_to_remove.remove("messages")
dataset = dataset.map(create_conversation, remove_columns=columns_to_remove, batched=False)

# Filter out conversations which are corrupted with wrong turns, keep which have even number of turns after adding system message
dataset["train"] = dataset["train"].filter(lambda x: len(x["messages"][1:]) % 2 == 0)
dataset["test"] = dataset["test"].filter(lambda x: len(x["messages"][1:]) % 2 == 0)

# save datasets to disk
dataset["train"].to_json("train_dataset.json", orient="records", force_ascii=False)
dataset["test"].to_json("test_dataset.json", orient="records", force_ascii=False)

Fine-tune the LLM with PyTorch FSDP, Q-Lora and SDPA

Next, the large language model is fine-tuned with PyTorch FSDP, Q-Lora and SDPA. Since the model is trained in a distributed setting, training has to be launched with torchrun and a Python script.

The author wrote a run_fsdp_qlora.py script that loads the dataset from disk, initializes the model and tokenizer, and starts training. It uses the SFTTrainer from the trl library to fine-tune the model.

SFTTrainer makes supervised fine-tuning of open large language models much easier to get started with. Concretely, it offers the following (a sketch of a typical setup follows the list):

  • Dataset formatting, including conversational and instruction formats (used)
  • Training on completions only, ignoring the prompt-only portion (not used)
  • Packing the dataset for more efficient training (used)
  • Parameter-efficient fine-tuning techniques, including Q-LoRA (used)
  • Initializing the model and tokenizer for conversational fine-tuning (not used, see below)
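The run_fsdp_qlora.py script itself is not reproduced in this article. The following is a hedged sketch of what a typical SFTTrainer setup with packing and a Q-LoRA peft_config looks like in trl 0.8.x; it assumes model, tokenizer and training_args have already been created, and the hyperparameters are illustrative rather than the author's exact values:

from datasets import load_dataset
from peft import LoraConfig
from trl import SFTTrainer

# Load the prepared dataset and render each conversation to plain text with the chat template
train_dataset = load_dataset("json", data_files="train_dataset.json", split="train")
train_dataset = train_dataset.map(
    lambda s: {"text": tokenizer.apply_chat_template(s["messages"], tokenize=False)},
    remove_columns=["messages"],
)

# Q-LoRA adapter config (illustrative hyperparameters)
peft_config = LoraConfig(r=16, lora_alpha=8, lora_dropout=0.05,
                         target_modules="all-linear", task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,                 # quantized base model, loaded beforehand
    args=training_args,          # TrainingArguments built from the yaml config
    train_dataset=train_dataset,
    dataset_text_field="text",
    peft_config=peft_config,
    max_seq_length=3072,         # matches max_seq_len in the config file
    packing=True,                # pack short samples into full-length sequences
    tokenizer=tokenizer,
)
trainer.train()
trainer.save_model()             # saves only the adapter weights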

Note: The author uses a chat template similar to Anthropic/Vicuna's, with "User" and "Assistant" roles. This is done because the special tokens in the base Llama 3 model (<|begin_of_text|> and the <|reserved_special_token_XX|> tokens) are untrained.

This means that if you want to use these tokens in the template, you have to train them and update the embedding layer and lm_head, which creates additional memory demands. If you have more compute, you can modify the LLAMA_3_CHAT_TEMPLATE variable in the run_fsdp_qlora.py script.
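As an illustration, such an Anthropic/Vicuna-style template could look roughly like the following Jinja string; this is a sketch, not necessarily the exact value used in the author's script:

# Jinja chat template with "Human" / "Assistant" roles, avoiding Llama 3's untrained special tokens
LLAMA_3_CHAT_TEMPLATE = (
    "{% for message in messages %}"
        "{% if message['role'] == 'system' %}"
            "{{ message['content'] }}"
        "{% elif message['role'] == 'user' %}"
            "{{ '\n\nHuman: ' + message['content'] + eos_token }}"
        "{% elif message['role'] == 'assistant' %}"
            "{{ '\n\nAssistant: ' + message['content'] + eos_token }}"
        "{% endif %}"
    "{% endfor %}"
    "{% if add_generation_prompt %}"
        "{{ '\n\nAssistant: ' }}"
    "{% endif %}"
)
tokenizer.chat_template = LLAMA_3_CHAT_TEMPLATE  # assumes tokenizer is already loaded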

For configuring the parameters, the author uses the new TrlParser, which lets us provide hyperparameters in a yaml file or override values from the config file by passing arguments explicitly on the CLI, e.g. --num_epochs 10. Below is the configuration file for fine-tuning Llama 3 70B on 4x A10G GPUs (4x 24GB).

%%writefile llama_3_70b_fsdp_qlora.yaml
# script parameters
model_id: "meta-llama/Meta-Llama-3-70b" # Hugging Face model id
dataset_path: "."                       # path to dataset
max_seq_len: 3072 # 2048                # max sequence length for model and packing of the dataset
# training parameters
output_dir: "./llama-3-70b-hf-no-robot" # Temporary output directory for model checkpoints
report_to: "tensorboard"                # report metrics to tensorboard
learning_rate: 0.0002                   # learning rate 2e-4
lr_scheduler_type: "constant"           # learning rate scheduler
num_train_epochs: 3                     # number of training epochs
per_device_train_batch_size: 1          # batch size per device during training
per_device_eval_batch_size: 1           # batch size for evaluation
gradient_accumulation_steps: 2          # number of steps before performing a backward/update pass
optim: adamw_torch                      # use torch adamw optimizer
logging_steps: 10                       # log every 10 steps
save_strategy: epoch                    # save checkpoint every epoch
evaluation_strategy: epoch              # evaluate every epoch
max_grad_norm: 0.3                      # max gradient norm
warmup_ratio: 0.03                      # warmup ratio
bf16: true                              # use bfloat16 precision
tf32: true                              # use tf32 precision
gradient_checkpointing: true            # use gradient checkpointing to save memory
# FSDP parameters: https://huggingface.co/docs/transformers/main/en/fsdp
fsdp: "full_shard auto_wrap offload"    # remove offload if enough GPU memory
fsdp_config:
  backward_prefetch: "backward_pre"
  forward_prefetch: "false"
  use_orig_params: "false"
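For reference, here is a minimal sketch of how TrlParser is typically wired up inside a training script to consume such a yaml file; the field names are illustrative and may differ from the author's run_fsdp_qlora.py:

from dataclasses import dataclass, field
from transformers import TrainingArguments
from trl.commands.cli_utils import TrlParser   # exported as trl.TrlParser in newer trl versions

@dataclass
class ScriptArguments:
    model_id: str = field(default=None, metadata={"help": "Hugging Face model id"})
    dataset_path: str = field(default=None, metadata={"help": "Path to the dataset"})
    max_seq_len: int = field(default=3072, metadata={"help": "Max sequence length"})

# Reads --config llama_3_70b_fsdp_qlora.yaml; CLI flags (e.g. --num_train_epochs 10) override the file
parser = TrlParser((ScriptArguments, TrainingArguments))
script_args, training_args = parser.parse_args_and_config()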

Note: At the end of training, GPU memory usage rises slightly (by about 10%) due to the overhead of saving the model. Make sure there is enough memory left on the GPUs to save the model.

To launch training, the author uses torchrun, which keeps the example flexible and easy to adapt to environments such as Amazon SageMaker or Google Cloud Vertex AI.

For torchrun and FSDP, the environment variables ACCELERATE_USE_FSDP and FSDP_CPU_RAM_EFFICIENT_LOADING need to be set to tell transformers/accelerate to use FSDP and to load the model in a memory-efficient way.

Note: If you do not want to use CPU offloading, change the fsdp setting in the yaml file (i.e. drop "offload" from "full_shard auto_wrap offload"). This only works on GPUs with more than 40GB of memory.

Training is launched with the following command:

!ACCELERATE_USE_FSDP=1 FSDP_CPU_RAM_EFFICIENT_LOADING=1 torchrun --nproc_per_node=4 ./scripts/run_fsdp_qlora.py --config llama_3_70b_fsdp_qlora.yaml

Expected memory usage:

  • Full fine-tuning with FSDP requires about 16x 80GB GPUs
  • FSDP + LoRA requires about 8x 80GB GPUs
  • FSDP + Q-Lora requires about 2x 40GB GPUs
  • FSDP + Q-Lora + CPU offloading requires 4x 24GB GPUs, using roughly 22GB per GPU and 127GB of CPU RAM, at a sequence length of 3072 and a batch size of 1

On a g5.12xlarge instance, with a dataset of 10,000 samples, training Llama 3 70B with Flash Attention for 3 epochs takes a total of 45 hours. At $5.67 per hour, the total cost comes to $255.15. That may sound expensive, but it lets you fine-tune Llama 3 70B on relatively small GPU resources.

If we scale the training up to 4x H100 GPUs, the training time drops to roughly 1.25 hours. Assuming one H100 costs $5-10 per hour, four of them cost $20-40 per hour, so the total comes out to roughly $25-50.

This is a trade-off between accessibility and performance. With more and better compute you can cut training time and cost, but even with modest resources you can still fine-tune Llama 3 70B. With 4x A10G GPUs, parts of the model have to be offloaded to the CPU, which lowers the overall FLOPS, so cost and performance differ accordingly.

Note: During evaluation and testing, the author observed that about 40 max steps (stacking 80 samples into sequences of length 3k) are already enough to get first results. Training for 40 steps takes about 1 hour and costs roughly $5.

Optional step: merge the LoRA adapter into the original model

When using QLoRA, only the adapter is trained; the full model is left unchanged. This means that when the model is saved during training, only the adapter weights are stored, not the complete model.

If you want to save the full model so it is easier to use with serving stacks such as Text Generation Inference, you can merge the adapter weights into the model weights with the merge_and_unload method and then save the model with save_pretrained. This produces a standard model that can be used for inference.

Note: This requires more than 192GB of CPU RAM.

#### COMMENT IN TO MERGE PEFT AND BASE MODEL ####
# from peft import AutoPeftModelForCausalLM
#
# # Load PEFT model on CPU
# model = AutoPeftModelForCausalLM.from_pretrained(
#     args.output_dir,
#     torch_dtype=torch.float16,
#     low_cpu_mem_usage=True,
# )
# # Merge LoRA and base model and save
# merged_model = model.merge_and_unload()
# merged_model.save_pretrained(args.output_dir, safe_serialization=True, max_shard_size="2GB")

Test the model and run inference

After training finishes, we evaluate and test the model. The author loads various samples from the original dataset and evaluates the model on them manually. Evaluating generative AI models is not trivial, since a single input can have multiple correct outputs. To learn more about evaluating generative models, read "Evaluate LLMs and RAG, a practical example using Langchain and Hugging Face".

Article link: https://www.philschmid.de/evaluate-llm

import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

peft_model_id = "./llama-3-70b-hf-no-robot"

# Load Model with PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    peft_model_id,
    torch_dtype=torch.float16,
    quantization_config={"load_in_4bit": True},
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)

Next, load the test dataset and try generating a response to an instruction.

from datasets import load_dataset
from random import randint

# Load our test dataset
eval_dataset = load_dataset("json", data_files="test_dataset.json", split="train")
rand_idx = randint(0, len(eval_dataset))
messages = eval_dataset[rand_idx]["messages"][:2]

# Test on sample
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    eos_token_id=tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]

print(f"**Query:**\n{eval_dataset[rand_idx]['messages'][1]['content']}\n")
print(f"**Original Answer:**\n{eval_dataset[rand_idx]['messages'][2]['content']}\n")
print(f"**Generated Answer:**\n{tokenizer.decode(response, skip_special_tokens=True)}")

# **Query:**
# How long was the Revolutionary War?
# **Original Answer:**
# The American Revolutionary War lasted just over seven years. The war started on April 19, 1775, and ended on September 3, 1783.
# **Generated Answer:**
# The Revolutionary War, also known as the American Revolution, was an 18th-century war fought between the Kingdom of Great Britain and the Thirteen Colonies. The war lasted from 1775 to 1783.

That wraps up the main workflow. If this caught your interest, don't just read about it: start with step one and try it yourself.
