feat: optimize the question generation process and COT data generation (#169)
* fix(chart): update Helm chart helpers and values for improved configuration
* feat(SynthesisTaskTab): enhance task table with tooltip support and improved column widths
* feat(CreateTask, SynthFileTask): improve task creation and detail view with enhanced payload handling and UI updates
* feat(SynthFileTask): enhance file display with progress tracking and delete action
* feat(SynthDataDetail): add delete action for chunks with confirmation prompt
* feat(SynthDataDetail): update edit and delete buttons to icon-only format
* feat(SynthDataDetail): add confirmation modals for chunk and synthesis data deletion
* feat(DocumentSplitter): add enhanced document splitting functionality with CJK support and metadata detection
* feat(DataSynthesis): refactor data synthesis models and update task handling logic
* feat(DataSynthesis): streamline synthesis task handling and enhance chunk processing logic
* fix(generation_service): ensure processed chunks are incremented regardless of question generation success (see the sketch below)
* feat(CreateTask): enhance task creation with new synthesis templates and improved configuration options
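The generation_service fix listed above amounts to moving the progress update out of the success path, so a chunk counts as processed even when question generation for it fails. A minimal sketch of that pattern; the types and names (`TaskProgress`, `process_chunk`, the injected `generate` callable) are illustrative and not taken from the repository:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TaskProgress:
    processed_chunks: int = 0
    failed_chunks: int = 0


def process_chunk(progress: TaskProgress, chunk: str,
                  generate: Callable[[str], List[str]]) -> List[str]:
    """Count the chunk as processed whether or not question generation succeeds."""
    try:
        return generate(chunk)
    except Exception:
        progress.failed_chunks += 1
        return []
    finally:
        # The fix: advance progress regardless of the outcome for this chunk.
        progress.processed_chunks += 1
```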
@@ -13,7 +13,7 @@ from app.db.models.data_synthesis import DataSynthesisFileInstance, SynthesisDat
 from app.db.session import AsyncSessionLocal
 from app.module.evaluation.schema.evaluation import SourceType
 from app.module.shared.schema import TaskStatus
-from app.module.shared.util.model_chat import call_openai_style_model, _extract_json_substring
+from app.module.shared.util.model_chat import call_openai_style_model, extract_json_substring
 from app.module.evaluation.schema.prompt import get_prompt
 from app.module.shared.util.structured_file import StructuredFileHandlerFactory
 from app.module.system.service.common_service import get_model_by_id
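The hunk above only renames the import: the private helper `_extract_json_substring` becomes the public `extract_json_substring`. Its real implementation lives in `app.module.shared.util.model_chat` and is not shown in this diff; as a rough idea of what such a helper does, here is a hedged sketch that slices the first balanced JSON object out of a model reply:

```python
def extract_json_substring(text: str) -> str:
    """Return the first {...} block found in text, or text unchanged if none is found.

    Sketch only; the actual helper in app.module.shared.util.model_chat may also handle
    arrays, code fences, or braces inside string literals.
    """
    start = text.find("{")
    if start == -1:
        return text
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return text[start:i + 1]
    return text
```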
@@ -36,8 +36,8 @@ class EvaluationExecutor:
                            .replace("{question}", eval_content.get("instruction")))
                            .replace("{answer}", eval_content.get("output")))
         if self.task.task_type == "COT":
-            prompt_text = ((prompt_text.replace("{question}", eval_content.get("question"))
-                            .replace("{conclusion}", eval_content.get("conclusion")))
+            prompt_text = ((prompt_text.replace("{question}", eval_content.get("instruction"))
+                            .replace("{conclusion}", eval_content.get("output")))
                            .replace("{chain_of_thought}", eval_content.get("chain_of_thought")))
         return prompt_text
 
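This hunk aligns the COT branch with the default branch: both now read the question from `instruction` and the conclusion from `output`, so COT records no longer need separate `question`/`conclusion` keys. A hedged sketch of the resulting mapping; the record shape is an assumption inferred from the keys visible in the diff:

```python
def build_cot_prompt(template: str, eval_content: dict) -> str:
    """Fill a COT evaluation template using the unified field names from the diff."""
    return (template
            .replace("{question}", eval_content.get("instruction", ""))
            .replace("{conclusion}", eval_content.get("output", ""))
            .replace("{chain_of_thought}", eval_content.get("chain_of_thought", "")))


# Example record (shape assumed for illustration):
example = {
    "instruction": "What is 2 + 2?",
    "output": "4",
    "chain_of_thought": "Adding two and two gives four.",
}
print(build_cot_prompt("Q: {question}\nA: {conclusion}\nReasoning: {chain_of_thought}", example))
```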
@@ -73,7 +73,7 @@ class EvaluationExecutor:
             call_openai_style_model, model_config.base_url, model_config.api_key, model_config.model_name,
             prompt_text,
         )
-        resp_text = _extract_json_substring(resp_text)
+        resp_text = extract_json_substring(resp_text)
         try:
             json.loads(resp_text)
         except Exception as e:
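In this last hunk, the reply from `call_openai_style_model` is trimmed with the now-public `extract_json_substring` and then validated with `json.loads`. A hedged sketch of that parse-and-validate step; the fallback behaviour on malformed JSON is an assumption, since the body of the `except` branch is cut off in the diff:

```python
import json
import logging
from typing import Callable, Optional

logger = logging.getLogger(__name__)


def parse_model_json(resp_text: str, extract: Callable[[str], str]) -> Optional[dict]:
    """Trim a model reply to its JSON payload and validate it; return None if it cannot be parsed."""
    trimmed = extract(resp_text)
    try:
        return json.loads(trimmed)
    except Exception as exc:
        # Assumption: malformed replies are logged and skipped rather than failing the evaluation task.
        logger.warning("Could not parse model reply as JSON: %s", exc)
        return None
```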