Young cryptanalyst Georgie is investigating different schemes of generating random integer numbers ranging from 0 to m-1. He thinks that standard random number generators are not good enough, so he has invented his own scheme that is intended to bring more randomness into the generated numbers.

First, Georgie chooses n and generates n random integer numbers ranging from 0 to m-1. Let the numbers generated be a1, a2, ..., an. After that Georgie calculates the sums of all pairs of adjacent numbers and replaces the initial array with the array of sums, thus getting n-1 numbers: a1+a2, a2+a3, ..., a(n-1)+an. Then he applies the same procedure to the new array, getting n-2 numbers. The procedure is repeated until only one number is left. This number is then taken modulo m, which gives the result of the generating procedure.

Georgie has proudly presented this scheme to his computer science teacher, but it was pointed out that the scheme has many drawbacks. One important drawback is the fact that the result of the procedure sometimes does not even depend on some of the initially generated numbers. For example, if n = 3 and m = 2, then the result does not depend on a2. Now Georgie wants to investigate this phenomenon. He calls the i-th element of the initial array irrelevant if the result of the generating procedure does not depend on ai. He considers various n and m and wonders which elements are irrelevant for these parameters. Help him to find it out.
Input
The input file contains several datasets. Each dataset has n and m (1 ≤ n ≤ 100000, 2 ≤ m ≤ 10^9) in a single line.
Output
On the first line of the output for each dataset print the number of irrelevant elements of the initial array for the given n and m. On the second line print all such i that the i-th element is irrelevant. Numbers on the second line must be printed in ascending order and must be separated by spaces.
Sample Input
3 2
Sample Output
1
2
#include<iostream>
#include<algorithm>
#include<string>
#include<map>
#include<queue>
#include<vector>
#include<cmath>
#include<stack>
#include<string.h>
#include<stdlib.h>
#include<cstdio>
#define ll long long
#define maxn 100005
#define eps 0.0000001
using namespace std;
#pragma comment(linker, "/STACK:1024000000,1024000000") /// enlarge the stack (MSVC-specific)
int n, m;
///int e[maxn]; /// prime-exponent counters (unused)
int em[maxn], en[maxn]; /// prime-exponent counters for m and for the running binomial coefficient
/*
Problem summary: given n numbers, repeatedly replace adjacent pairs with
their sums until a single number remains. For a given m, find the
irrelevant elements and print their indices.

From Pascal's triangle it is easy to see that the final value is
sum over i of C(n-1, i-1) * a_i, i.e. the coefficients are binomial
coefficients, so element i is irrelevant iff m divides C(n-1, i-1).
Because n can be as large as 100000, storing the intermediate binomial
coefficients directly is infeasible (short of big integers).

This is where unique factorization pays off: all that matters is whether
the intermediate coefficient is divisible by m. So record the prime
factors of m with their exponents, and while iterating maintain the
exponents of those primes in the current binomial coefficient, comparing
the two exponent arrays at each step.

One pitfall is the range of m, which is clearly much larger than that of
n: if after trial division a factor of m larger than n remains, we can
continue to the next dataset immediately, since every prime factor of
C(n-1, i) is at most n-1, so no such result can exist. This brings the
complexity down to about O(n log n).
*/
void change(int x, int d)
{
    /// multiply (d = +1) or divide (d = -1) the tracked binomial
    /// coefficient by x: update the prime-exponent array en accordingly
    for (int i = 2; i * i <= x; i++)
        while (x % i == 0) { en[i] += d; x /= i; }
    if (x > 1) en[x] += d;
}

int main()
{
    while (scanf("%d%d", &n, &m) != EOF)
    {
        memset(en, 0, sizeof(en));
        memset(em, 0, sizeof(em));
        vector<int> pm;                     /// distinct prime factors of m
        for (int i = 2; i * i <= m; i++)
        {
            if (m % i == 0)
            {
                pm.push_back(i);
                while (m % i == 0) { em[i]++; m /= i; }
                if (m == 1) break;
            }
        }
        if (m > 1)
        {
            if (m > n)  /// key pruning (note the data ranges): a prime factor
            {           /// larger than n can never divide C(n-1, i)
                puts("0");
                puts("");
                continue;
            }
            em[m]++;
            pm.push_back(m);
        }
        int tn = n - 1;
        int ans[maxn], cnt = 0;
        for (int i = 1; i < n; i++)
        {
            change(tn - i + 1, 1);          /// C(tn, i) = C(tn, i-1) * (tn-i+1) / i
            change(i, -1);
            int flag = 1;
            for (int j = 0; j < (int)pm.size(); j++)
                if (em[pm[j]] > en[pm[j]]) { flag = 0; break; }
            if (flag) ans[cnt++] = i + 1;   /// coefficient of element i+1 is C(n-1, i)
        }
        printf("%d\n", cnt);
        if (cnt)
        {
            printf("%d", ans[0]);
            for (int i = 1; i < cnt; i++) printf(" %d", ans[i]);
            puts("");
        }
        else puts("");
    }
    return 0;
}