[HUDI-8400] apply 'write.ignore.failed' when write data failed v2 #12150

Open
wants to merge 5 commits into base: master
Conversation

fhan688 (Contributor) commented Oct 23, 2024

Change Logs

In the Flink engine, when a task hits an exception while writing data, the exception is swallowed and reported to the StreamWriteCoordinator via the write event; the coordinator then decides, based on 'write.ignore.failed', whether to commit despite the write failure.

This PR applies 'write.ignore.failed' earlier, at the point where the write failure occurs, so the exception is thrown sooner.

For example, if the checkpoint interval of a Flink job is 15 minutes, the exception is not discovered until the checkpoint commit, which causes longer data latency in latency-sensitive real-time scenarios.
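
Below is a minimal illustrative sketch of the fail-fast idea described above, assuming 'write.ignore.failed' = true means failures are tolerated and deferred to the coordinator; the class and method names are hypothetical and are not the actual code paths changed by this PR.

  // Hypothetical, simplified write task; not the real Hudi/Flink classes.
  class WriteTaskSketch {
    private final boolean ignoreWriteFailed;  // value of 'write.ignore.failed'

    WriteTaskSketch(boolean ignoreWriteFailed) {
      this.ignoreWriteFailed = ignoreWriteFailed;
    }

    void write(Object record) {
      try {
        doWrite(record);
      } catch (Exception e) {
        if (!ignoreWriteFailed) {
          // Fail-fast behavior proposed here: throw inside the write task
          // instead of waiting for the coordinator to inspect the write
          // statuses at checkpoint commit time (possibly minutes later).
          throw new RuntimeException("Write failed for record " + record, e);
        }
        // Ignoring behavior: record the error and keep going; the
        // coordinator decides at commit time whether to tolerate it.
        recordErrorInWriteStatus(record, e);
      }
    }

    private void doWrite(Object record) throws Exception { /* actual write */ }

    private void recordErrorInWriteStatus(Object record, Exception e) { /* bookkeeping */ }
  }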

Impact

modules: hudi-client, hudi-flink-datasource

Risk level (write none, low medium or high below)

low

Documentation Update

None

Contributor's checklist

  • Read through contributor's guide
  • Change Logs and Impact were stated clearly
  • Adequate tests were added if applicable
  • CI passed

github-actions bot added the size:S label (PR with lines of changes in (10, 100]) on Oct 23, 2024
fhan688 (Contributor, Author) commented Oct 23, 2024

The previous PR (#12136) was reverted; I am reopening it here, and more discussion may be needed. @danny0405

danny0405 (Contributor) commented:

We should clarify these items:

  1. Should we promote the write.ignore.failed option to a common write config for every engine? Previously each engine had its own options and behavior.
  2. Should we throw the exception in the write handles or in the driver (after the write statuses are collected)?
  3. Should this option default to false or true?

  */
  protected void ignoreWriteFailed(Throwable throwable) {
    if (config.getIgnoreWriteFailed()) {
      throw new HoodieException(throwable.getMessage(), throwable);
    }
  }
A Contributor left an inline review comment on this code:
Why do we throw an exception when 'ignore.write.failed' is true?

fhan688 (Contributor, Author) commented Oct 25, 2024

Replying to the three items above:

  1. I agree. write.ignore.failed is currently a config in FlinkOptions; this PR promotes it to HoodieWriteConfig in the hudi-client-common module under the name 'hoodie.write.ignore.failed' (a sketch of the promoted option follows below).
  2. I think failing fast is the better choice. In a job with heavy production traffic, a delay of several minutes means a huge number of records have to be dealt with during restore.
  3. False, considering data quality.
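
A minimal sketch of what the promoted option could look like, assuming it lands in HoodieWriteConfig using Hudi's usual ConfigProperty builder; the key, documentation text, and accessor name below are illustrative, not the merged code.

  // Hypothetical definition in HoodieWriteConfig (hudi-client-common).
  public static final ConfigProperty<Boolean> IGNORE_WRITE_FAILED = ConfigProperty
      .key("hoodie.write.ignore.failed")
      .defaultValue(false)  // item 3: default false, favoring data quality
      .withDocumentation("Whether to tolerate failed writes and still commit. "
          + "When false, a write failure fails the job as early as possible.");

  // Accessor consulted by the write path, e.g. by ignoreWriteFailed(...).
  public boolean getIgnoreWriteFailed() {
    return getBoolean(IGNORE_WRITE_FAILED);
  }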

hudi-bot commented

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure: re-run the last Azure build

danny0405 (Contributor) commented:

@fhan688 Let's fire a JIRA issue around this and move the discussion there.

fhan688 (Contributor, Author) commented Oct 30, 2024

OK. https://issues.apache.org/jira/browse/HUDI-8400

danny0405 (Contributor) replied:

Sorry, I meant the GH issue, which is easier to communicate on.

fhan688 (Contributor, Author) commented Oct 31, 2024, replying to the exchange above

Thanks. #12187
