
Commit: Update README_intel.md (#1159)
gfursin authored Mar 7, 2024
2 parents 377e121 + c4313f3 commit 6cb8886
16 changes: 12 additions & 4 deletions docs/mlperf/inference/gpt-j/README_intel.md
@@ -15,14 +15,22 @@ cm run script --tags=run-mlperf,inference,_find-performance \
* Intel implementation currently supports only datacenter scenarios


### Do full accuracy and performance runs for all the scenarios
### Do full accuracy and performance runs for the Offline scenario

```
cm docker script --tags=run-mlperf,inference,_submission,_all-scenarios \
--model=gptj-99 --implementation=intel-original --backend=pytorch \
--category=datacenter --division=open --quiet
cm run script --tags=run-mlperf,inference,_submission \
--scenario=Offline --model=gptj-99 --implementation=intel-original --backend=pytorch \
--category=datacenter --division=open --execution-mode=valid --quiet
```

### Do full accuracy and performance runs for the Server scenario

```
cm run script --tags=run-mlperf,inference,_submission \
--scenario=Server --model=gptj-99 --implementation=intel-original --backend=pytorch \
--category=datacenter --division=open --execution-mode=valid --server_target_qps=0.3 --quiet
```
* `--server_target_qps` can be raised to the maximum rate the given system can sustain while still producing a valid result
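Finding that maximum is essentially a manual binary search: pick a target QPS, run, check whether the result is valid, and move the bound accordingly. A minimal sketch of that loop is below; `run_benchmark` is a hypothetical stub (not part of the CM tooling) that in practice would wrap the `cm run script` command above with `--server_target_qps=$1` and succeed only for a valid run:

```
run_benchmark() {
  # Hypothetical stub: in a real sweep, replace this body with the
  # "cm run script ... --server_target_qps=$1" command above and
  # return success only when MLPerf reports a VALID result.
  # Here we pretend any target up to 0.45 QPS is valid on this system.
  awk -v qps="$1" 'BEGIN { exit !(qps <= 0.45) }'
}

lo=0.1; hi=0.9
for _ in 1 2 3 4 5 6; do
  mid=$(awk -v a="$lo" -v b="$hi" 'BEGIN { printf "%.3f", (a + b) / 2 }')
  if run_benchmark "$mid"; then
    lo="$mid"   # valid run: try a higher target
  else
    hi="$mid"   # invalid run: back off
  fi
done
echo "highest valid server_target_qps found: $lo"
```

Since each full run is expensive, a coarse sweep of two or three candidate values is often enough in practice; the bisection above just makes the stopping rule explicit.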

### Generate and upload MLPerf submission

